instruction | input | output | system
---|---|---|---|
DISABLED test_comprehensive_nn_functional_batch_norm_without_cudnn_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_native_dropout_backward_cpu_int32 (__main__.TestInductorOpInfoCPU) | Resolving the issue because the test is not flaky anymore after 800 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_fft_hfft2_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 700 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_scatter_reduce_prod_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Move the magma Windows builds to pytorch/pytorch as well | I'm working on this! | ||
DISABLED test_comprehensive_mvlgamma_mvlgamma_p_5_cuda_int32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_prelu_cpu_float16 (__main__.TestInductorOpInfoCPU) | Resolving the issue because the test is not flaky anymore after 800 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_transpose_copy_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_adaptive_max_pool3d_cpu_float32 (__main__.TestInductorOpInfoCPU) | Resolving the issue because the test is not flaky anymore after 800 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_dstack_cuda_bool (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_column_stack_cuda_int32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_amax_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_masked_softmin_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_trapezoid_cuda_int32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_fill_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
ImportError: /usr/lib64/libtorch_cuda.so: undefined symbol: cudnnSetDropoutDescriptor | When I put cuDNN within `/opt/cuda-12.6/` and built, `ldd` now shows that `/usr/lib64/libtorch_cuda.so` links to

```
libcudnn.so.9 => /opt/cuda-12.6/lib64/libcudnn.so.9 (0x00007f9b89800000)
```

So the CMake setup doesn't seem to work with cuDNN in non-standard directories. | ||
[dynamic][inductor][recompilations] Static guard on tensor dim size | TODO is relevant | ||
DISABLED test_comprehensive_special_zeta_cuda_bool (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[ONNX] No ONNX function found for `function sym_not` | /cc @justinchuby is this a duplicate of #136572? | ||
[ROCm] [Triton 3.2] Shmem OOM errors in flex attention | Resolved as of #140270 | ||
`import torch` failed after a clean install of CUDA pytorch 2.5.1 | FYI I updated `mkl` and that fixed it for me | ||
DISABLED test_comprehensive_prod_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[ONNX] Optimizer fails when comparing dynamic shapes | Should be fixed. Please install the latest onnxscript. @titaiwangms could you confirm? | ||
DISABLED test_comprehensive_scatter_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_embedding_bag_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_scatter_add_cuda_bool (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
LoweringException: AssertionError: convert FlexibleLayout to FixedLayout first when using score_mod | This is fixed on main now. | ||
[triton 3.2] user-defined triton kernels mutation analysis failures | @aakhundov can you take a look at this?

@davidberard98 this looks like a triton version mismatch, where the new triton deprecated some API

edit: ah it is from the pin update | ||
DISABLED test_comprehensive_diag_embed_cuda_int64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_pca_lowrank_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1700 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_scatter_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_conv3d_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_index_reduce_prod_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[AOTI] SymInt used in torch.cond isn't codegen'd | Closing as it is a duplicate of #140842 | ||
[AOTI] Can AOTI return scalars? | Thanks for confirming. I will close this now and come back if there is a need. | ||
[Break XPU] Inductor CPU test case `test_codecache.py::TestFxGraphCache::test_freezing_device_cpu` failed when built with USE_MKLDNN=ON | After double-checking, the failing case actually always fails on machines where `torch.ops.mkldnn._is_mkldnn_fp16_supported()` returns true.
The root cause: on a machine with mkldnn_fp16 support, the weight_pack in mkldnn_fusion.py works, which results in an mkldnn-format tensor; the exception BypassFxGraphCache("mkldnn tensors unpickleable") is then raised, which causes the fxgraph not to be cached.

Fixed in #39705 | ||
DISABLED test_comprehensive_eye_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 450 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_cumprod_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
scaled_dot_product_attention Hq must equal H | Try setting `enable_gqa=True` | ||
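A minimal sketch of the suggestion above, assuming PyTorch 2.5+ (where `scaled_dot_product_attention` gained the `enable_gqa` flag); the shapes are illustrative:

```python
import torch
import torch.nn.functional as F

# Grouped-query attention: the number of query heads (Hq) is a multiple of
# the number of key/value heads (H). enable_gqa=True lets SDPA accept this
# layout instead of raising "Hq must equal H".
batch, seq, head_dim = 2, 16, 64
query = torch.rand(batch, 8, seq, head_dim)  # 8 query heads
key = torch.rand(batch, 2, seq, head_dim)    # 2 key/value heads
value = torch.rand(batch, 2, seq, head_dim)

out = F.scaled_dot_product_attention(query, key, value, enable_gqa=True)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```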
DISABLED test_comprehensive_cumprod_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_bilinear_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Hooks on param AccumulateGrad are not called when the param was called with set_ | Thanks @janeyx99 for the information.
Since register_post_accumulate_grad_hook is the preferred and robust way to hook into grad-accumulation events, I will close the issue. | ||
Compiling PyTorch from source using a Dockerfile, I got a fishy error. It seems to be related to the ld linker. | @shysuen I'd recommend you post this question on the [PyTorch Forums](https://discuss.pytorch.org/), as you're more likely to receive assistance there. Note that the issues here are reserved for problems within PyTorch itself. Closing for now, but feel free to reopen if discussion on the forums indicates the presence of a PyTorch bug. | ||
DISABLED test_no_grad_copy (__main__.TestAutograd) | Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

- Test name: `test_no_grad_copy (__main__.TestAutograd)`
- Platforms for which to skip the test: dynamo
- Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_no_grad_copy (__main__.TestAutograd)` will be disabled in PyTorch CI for these platforms: dynamo. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

### How to re-enable a test

To re-enable the test globally, close the issue. To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate. To re-enable a test only for a PR, put `Fixes #139733` in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell if the test is still flaky on the PR. | ||
DISABLED test_comprehensive_stft_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 950 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_deg2rad_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_addr_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 950 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_cat_cuda_int64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_cosine_similarity_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_diff_cuda_int64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_conv_transpose3d_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Segmentation Fault using MSELoss on PyTorch on MPS | > Not sure who is the right judge, but does not feel like a high priority issue to me: crashes are indeed bad, but users migrating any existing frameworks are unlikely to be affected, as MSELoss for integral types doesn't make much sense...

Though I agree the practical use case is limited, I still think this is an important issue. I was trying to use MSELoss on a float tensor but made an error that led to a long tensor instead, and getting a segfault caught me by surprise.

For newer programmers who may not be equipped to diagnose such problems themselves (and who use MSE very commonly), this is a bigger issue. Hopefully, the fix isn't much more difficult than some type-checking. | ||
DISABLED test_comprehensive_hypot_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 450 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_batch_norm_without_cudnn_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_dist_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nansum_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 400 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_addbmm_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 950 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_var_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_cross_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_interpolate_nearest_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_new_empty_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_signal_windows_general_cosine_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Importing ZeroRedundancyOptimizer prints deprecation warning | Thanks for reporting, @rationalism @DiWu17. #140889 should fix this. | ||
DISABLED test_comprehensive_sum_to_size_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Aliasing is not being prevented in onnx.export(dynamo=True) - potentially wrong graph exported | Fixed by #139905 | ||
DISABLED test_comprehensive_heaviside_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[ROCm] [Upstream Triton] Flex attention `Assertion idx < size()' failed.` | > @jataylo I figured out a fix here, triton-lang/triton#5084
>
> The pass was hardcoding rank==2; the assert came when rank==1
>
> If this is a satisfactory fix we can just go ahead with that

Thanks @SamGinzburg, let's get both in. We'll have to change the configs' num_stages selection on the PyTorch side either way, otherwise we OOM out 50-60% of UTs. | ||
aten.full_like cannot be decomposed to core aten ops | I think we should add the core tag to this op | ||
DISABLED test_comprehensive_scatter_reduce_mean_cuda_float16 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
Optimize peak memory for flash _scaled_dot_product_attention_math | cc @drisspg | ||
Output from nn.Linear is different from that of manual calculation | https://pytorch.org/docs/stable/generated/torch.nn.Linear.html

You should use `(data @ model.weight.T) + model.bias` | ||
DISABLED test_comprehensive_nanquantile_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_comprehensive_nn_functional_prelu_cuda_float64 (__main__.TestInductorOpInfoCUDA) | Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_restart_pg (__main__.ProcessGroupNCCLGroupTest) | @shuqiangzhang can you please take a look? | ||
Setting an `int` and `bool` value to `rtol` and `atol` argument of `isclose()` works against the error message | This is one of those issues where it's about Python arg parsing vs having PyTorch be aware of type promotions of python scalars. I think I had commented on one of your issues saying that the most productive way to get action from this concern is to open one umbrella issue that tracks all of these incidents, and to link these to that umbrella issue.

Closing this issue and all other similar issues until there is such an issue with a link. | ||
DISABLED test_manual_with_data_parallel_dp_type_DDP_ScheduleClass1_use_new_runtime_False (__main__.ComposabilityTest) | Resolving the issue because the test is not flaky anymore after 195 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
DISABLED test_manual_with_data_parallel_dp_type_DDP_ScheduleClass2_use_new_runtime_False (__main__.ComposabilityTest) | Resolving the issue because the test is not flaky anymore after 195 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[flex attention][torch.compile] LoweringException: TypeError: cannot determine truth value of Relational | Yeah, closing in favor of #139064 | ||
DISABLED test_cuda_tracker_equivalence (__main__.TestMemTracker) | Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

- Test name: `test_cuda_tracker_equivalence (__main__.TestMemTracker)`
- Platforms for which to skip the test: rocm
- Disabled by `huydhn`

Within ~15 minutes, `test_cuda_tracker_equivalence (__main__.TestMemTracker)` will be disabled in PyTorch CI for these platforms: rocm. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

### How to re-enable a test

To re-enable the test globally, close the issue. To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate. To re-enable a test only for a PR, put `Fixes #139515` in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell if the test is still flaky on the PR. | ||
PyTorch does not work on CUDA 12.6 | Never mind, it can be downloaded from https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/ | ||
Add ci-no-td label on reland PR | Fixed the issue | ||
[triton 3.2] test_custom_scan_op_cuda | Confirmed fixed! | ||
[triton 3.2] test_block_mask_non_divisible | Confirmed fixed! | ||
[triton 3.2] test_compiling_create_block_mask_no_recompile | Appears to be fixed by #139502 | ||
Separate grad norm computation from `torch.nn.utils.clip_grad_norm_` | That PR as-is will not land, but I would not object to a cleaned-up version of it with clearer semantics (+ being out of foreach). For example, the norm part could be confused as a norm over all the values in the Tensors of the List vs a norm of norms (which is what it is). | ||
Flex attention returns zeros for batch dimensions > 0 in certain cases | Thanks for reporting; the bug-fix PR is here: #139516 | ||
[KINETO Profiler] Building function setTraceID has warning 'defined but not used' when USE_KINETO=0 | > Thanks for reporting. This was introduced by ac7acfb
>
> Also, just to clarify, we should use ifdef to remove this warning, not namespace. It seems like you did that with your PR anyway.

Yes. Sorry for the late response. This issue can be closed. Thank you. | ||
libtorch. Executing at::sum after multiplying the slices yields different results | This is expected; see https://pytorch.org/docs/stable/notes/numerical_accuracy.html | ||
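A quick illustration of the linked note, assuming nothing beyond stock PyTorch: floating-point addition is not associative, so reducing the same values in a different order can differ in the last bits:

```python
import torch

torch.manual_seed(0)
x = torch.rand(100_000)

a = x.sum()
b = x.flip(0).sum()  # same values, reversed accumulation order
print(a.item(), b.item())    # may differ slightly in the last digits
print(torch.allclose(a, b))  # True within default tolerances
```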
PyTorch 2.5.1 cannot find MPS device on a macOS 12.7.6 arm64 | You cannot run PyTorch 2.5+ with MPS on macOS 12: in #133141 I disabled macOS 12 support, because too many things are broken in the MPS framework on macOS 12. And even if one were to roll back that change, one would not see any perf benefits/new functionality in MPS support in 2.5 compared to 2.4.

But the message is really confusing; it should have said that MPS is supported starting from 13.0 (will update the error message soon) | ||
DISABLED test_mutable_custom_op_fixed_layout2_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | Resolving the issue because the test is not flaky anymore after 1400 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive | ||
[Export] dynamic shape divisibility problem results in excessive warnings from Ignored guard + stack trace | ```python
import torch
from torch.export import Dim

class TestModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y=None):
        return x + y.reshape(x.shape)

model = TestModule().cuda()
x = torch.ones(4, 4).cuda()
y = torch.rand(2, 8).cuda()
model(x, y)

ep = torch.export.export(
    model,
    (x, y),
    dynamic_shapes={"x": None, "y": {0: Dim.DYNAMIC, 1: Dim.DYNAMIC}},
    strict=False,
)
torch._inductor.aoti_compile_and_package(
    ep,
    (x, y),
)
```

Updated repro | ||
Incorrect result of torch.count_nonzero with MPS | 🤦
pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm, lines 198 to 205 at commit 8ace3e8:
https://github.com/pytorch/pytorch/blob/8ace3e80236f548e2bbc5b09596df74fd1c267f4/aten/src/ATen/native/mps/operations/ReduceOps.mm#L198-L205

```cpp
if (output_t.numel() == 0 || input_t.numel() == 0) {
  if (reduction_type == MPSReductionType::PROD) {
    output_t.fill_(1);
  } else if (reduction_type == MPSReductionType::SUM) {
    output_t.zero_();
  }
  return;
}
```

(I.e. the op returns an uninitialized scalar for mean, min, max and nansum) | ||
Build a non-conda package for `magma-cuda` | Current conda tarball structure:

```
.
├── include
│   ├── magma_auxiliary.h
│   ├── (...)
│   └── magma_zvbatched.h
├── info
│   ├── about.json
│   ├── files
│   ├── git
│   ├── hash_input.json
│   ├── index.json
│   ├── licenses
│   │   └── COPYRIGHT
│   ├── paths.json
│   └── recipe
│       ├── build.sh
│       ├── cmakelists.patch
│       ├── CMake.patch
│       ├── conda_build_config.yaml
│       ├── getrf_nbparam.patch
│       ├── getrf_shfl.patch
│       ├── meta.yaml
│       ├── meta.yaml.template
│       └── thread_queue.patch
└── lib
    └── libmagma.a
```

- The `lib` and `include` folders are produced by the regular build.
- The license is available in the source, and it would be good to include it.
- The recipe would be nice to include as well.
- `info.json` is conda-specific, so we can ignore it.
- `paths.json` is produced by conda too; it includes the checksums of all files in the package, which is nice, but I wouldn't go down the route of producing that in a bash script. | ||
Report a bug when I transfer the pytorch saved model to onnx | Please reopen if the error persists. | ||
`c10::ArrayRef::ArrayRef(const Container&)` SFINAE bug | @judicaelclair Please do, and add a unit test if possible | ||
[MPS] Torch 2.5.x and Nightlies are using 50% more memory and are 60% slower than 2.4.1 running Stable Diffusion | Just tried a nightly.

```
Python version 3.11.10 (main, Sep 7 2024, 08:05:54) [Clang 16.0.0 (clang-1600.0.26.3)] PyTorch version 2.6.0.dev20241107 diffusers.version is 0.31.0
Loading pipeline components...:
100%|██████████████████████████| 7/7 [00:09<00:00, 1.38s/it]
100%|██████████████████████████| 10/10 [00:53<00:00, 5.32s/it]
run time in 57.97 sec, end_mps_mem 14004.62 Mb mem increase 12353.83 Mb
```

Speed is back, in fact it's improved, and memory is practically there, within ~3.4%. | ||
[MPS] Extend autocast support to bf16 | Fix incoming.

@pytorchbot label 'module: mps' | ||
[triton 3.2] inductor cumsum w/ upstream triton fails accuracy and device asserts | Upstream triton bugs w/ reductions + debug=True still exist (triton-lang/triton#5045 in addition to 5033, linked above), but no longer affect Inductor after we set sanitize_overflow=False. | ||
Reported code that emitted guard should not reference polyfill | @ezyang hi, I would like to take this issue. I am new to the community; could you send me the contribution guide and installation guide? | ||
[Enhancement] Allow Dict[str, Any] format for `transforms` argument in `torchvision.transforms.v2.Compose` | This would also make it so your linter is unable to detect invalid str keys if you misspell any operators in your Compose pipeline, so I don't really see an advantage. | ||
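For context, a sketch contrasting today's API with the proposed dict form; the dict syntax is hypothetical, not an actual torchvision API:

```python
import torch
from torchvision.transforms import v2

# Current API: transforms are instances, so a misspelled class name is a
# static (linter/import) error.
pipeline = v2.Compose([
    v2.RandomHorizontalFlip(p=0.5),
    v2.ToDtype(torch.float32, scale=True),
])

# Proposed form (hypothetical): operator names become plain strings, so a
# typo like "RandomHorizontalFlp" could only fail at runtime -- the
# objection raised above.
proposed = {"RandomHorizontalFlip": {"p": 0.5}, "ToDtype": {"scale": True}}
```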