Datasets:
instruction (string, lengths 1–910) | input (string, 1 class) | output (string, lengths 19–270k) | system (string, 1 class) |
---|---|---|---|
User-defined triton kernel: inductor emits invalid python when autotune configs have empty meta-param dicts | A fix is in [#141824](https://github.com/pytorch/pytorch/pull/141824). | ||
[MPS] Autocast fails for `F.scaled_dot_product_attention` | Do you mind just proposing this PR and I'll merge it in today's nightly? | ||
scaled_dot_product_attention with MATH backend is ~4x slower on torch 2.5.1 | > 2.5.0 changed the default behavior of precision for intermediates (see [#128922](https://github.com/pytorch/pytorch/pull/128922)). You can get the previous behavior by [setting an option](https://pytorch.org/docs/stable/notes/numerical_accuracy.html#reduced-precision-reduction-for-fp16-and-bf16-in-scaled-dot-product-attention-sdpa). Does this solve the issue for you?
> xref [#135778](https://github.com/pytorch/pytorch/issues/135778)

Thanks, that works. | ||
[ONNX] Fix sequence handling logic in _building | https://github.com/pytorch/pytorch/pull/138656/files | ||
MPS Backend: NotImplementedError for aten::_sparse_coo_tensor_with_dims_and_tensors when running Whisper | I'm also in the same boat. However, I noticed that Whisper has had MLX support for a while now through unofficial channels like https://github.com/mustafaaljadery/lightning-whisper-mlx. If you need a short-term solution, maybe try those instead?
The more popular ones: https://github.com/ml-explore/mlx-examples | ||
Doctests failed in XPU Linux CI test | > Have you checked it's not due to the broken trunk?

Hi @malfet, sorry for the confusion: those error messages did not cause the doctests to fail. The CI test failure has another cause, so I will close this issue. Thanks. | ||
DISABLED test_threading (__main__.TestWithNCCL) | Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:
- Test name: `test_threading (__main__.TestWithNCCL)`
- Platforms for which to skip the test: linux
- Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_threading (__main__.TestWithNCCL)` will be disabled in PyTorch CI for these platforms: linux. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

### How to re-enable a test
To re-enable the test globally, close the issue. To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate. To re-enable a test only for a PR, put `Fixes #141634` in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell if the test is still flaky on the PR. | ||
ONNX export (dynamo) not working with torchvision ops | dynamo=True is the recommended API. torchvision.nms needs to be implemented in ONNX Script in this case. @titaiwangms can provide more guidance. | ||
torch.fx.symbolic_trace does not support dynamic constructs | > Yes, but export.export has input argument spec matching (the ExportGraphSignature) that says how to translate the original signature to the inner graph. So it is not as big a problem as you might think. Try calling exported_model with a single input; it will work.

Thank you very much for your response. The module() method was truly a delightful discovery, and it resolved my doubts.

Here I'm also attaching the documentation on export.export from torch 2.5.0, so that anyone who encounters this issue later can find an answer: [module() method](https://pytorch.org/docs/stable/export.html).

To any future readers who share the same concerns I did: you might consider using torch.export.export as an alternative to fx.symbolic_trace. Alternatively, you can first trace once with torch.export.export and then use torch.fx.symbolic_trace to obtain the result as an fx GraphModule, which makes it easier to use the to_folder method to save clean model code, like below:

```python
import torch
import torch.nn as nn


class SimpleModel(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.linear = nn.Linear(128, 10)

    def forward(self, x):
        return self.linear(x)


if __name__ == "__main__":
    model = SimpleModel()
    x = torch.randn(4, 128)
    exported_model = torch.export.export(model, (x,))
    gm = torch.fx.symbolic_trace(exported_model.module())
    gm.to_folder('xxx', 'yyy')
```
| ||
Add support to load complex number when `weights_only=True` in `torch.load` | Sorry, I see this issue has already been fixed ([#140850](https://github.com/pytorch/pytorch/pull/140850)). | ||
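For releases that predate the fix referenced above, a possible workaround is to allow-list the builtin `complex` type for the `weights_only` unpickler. A minimal sketch, assuming `torch.serialization.add_safe_globals` is available (torch 2.4+) and that `complex` is the only missing global; the checkpoint path is a placeholder:

```python
# Hedged workaround sketch for versions before the fix above: allow-list the builtin
# `complex` type so the weights_only unpickler accepts it. The filename is a placeholder.
import torch

torch.serialization.add_safe_globals([complex])
state = torch.load("checkpoint_with_complex.pt", weights_only=True)
```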
FlexAttention with compiled block mask is slow when varying sequence lengths | Thanks @drisspg @jbschlosser, I fixed the issue using your code and switching to the latest nightly. Performance is not far from FA2, which is great to see. I ran into NaNs when using float16, which did not occur with FA2, but bfloat16 seems to be running OK. | ||
MPS Regression when rendering LTXVideo (after pytorch2.4.1) | I can confirm it fails in the same way using the LTX repo code with inference.py modified to use MPS. | ||
[export] A node has no users in the exported program | I'll close this issue. It seems the default behaviour for setitem is not to decompose it into slice_scatter anymore. | ||
Could not run 'aten::_sparse_coo_tensor_with_dims_and_tensors' with arguments from the 'SparseMPS' backend | OK, it turns out there was a change in Cellpose 3.1 that uses `sparse_coo_tensor()`, which is not yet implemented. Closing this and will post in the MPS feature issue. | ||
AttributeError: module 'distutils' has no attribute '_msvccompiler' | And it looks like it affects both regular and XPU builds, but somehow not nightlies; see [#141286 (comment)](https://github.com/pytorch/pytorch/pull/141286#issuecomment-2494414883). | ||
Issue with torch.load failing to resolve torch.nested._internal during checkpoint loading (Nightly PyTorch ROCm 6.2) | This issue is fixed in the newest nightly! | ||
[Break XPU] xpu: build fails for XPU backend due to outdated aoti_torch/generated/c_shim_xpu.h | @dvrogozh I already filed a PR: [#141086](https://github.com/pytorch/pytorch/pull/141086) | ||
Request for Assistance: Building PyTorch 2.5.1 (libtorch + main PyTorch) with CUDA 12.6.3 and cuDNN 9.5.1 | Have you tried following https://github.com/pytorch/pytorch?tab=readme-ov-file#from-source?
And please use https://discuss.pytorch.org/ to ask generic questions on how to use/build PyTorch; use GitHub issues to report problems with the PyTorch framework. | ||
FlexAttention: `CUDA error: an illegal memory access was encountered` | Maybe I worded it incorrectly. Taking just one of the block_mask attributes, `kv_indices`:

The shape is [B, H, ceil(Q_LEN / BLOCK_SIZE), ceil(KV_LEN / BLOCK_SIZE)], assuming you created it with `create_block_mask`.

When iterating along the KV dim, we only iterate across kv_blocks until we reach the end of the inputs, not the KV_LEN that was given when creating the block_mask.

```
>>> create_block_mask(causal_mask,None,None,1,512)
BlockMask(
    kv_num_blocks=torch.Size([1, 1, 1]),
    kv_indices=torch.Size([1, 1, 1, 4]),
    full_kv_num_blocks=torch.Size([1, 1, 1]),
    full_kv_indices=torch.Size([1, 1, 1, 4]),
    q_num_blocks=torch.Size([1, 1, 4]),
    q_indices=torch.Size([1, 1, 4, 1]),
    full_q_num_blocks=torch.Size([1, 1, 4]),
    full_q_indices=torch.Size([1, 1, 4, 1]),
    BLOCK_SIZE=(128, 128),
    shape=(1, 1, 128, 512),
    sparsity=75.00%,
    mask_mod=causal_mask
)
>>> create_block_mask(causal_mask,None,None,1,512).kv_indices
tensor([[[[0, 1, 2, 3]]]], device='cuda:0', dtype=torch.int32)
>>> create_block_mask(causal_mask,None,None,1,512).kv_num_blocks
tensor([[[1]]], device='cuda:0', dtype=torch.int32)
```

This is essentially saying that if your Q length is 1 and you are doing causal masking, the only attention scores that can contribute to the final output are in the first [Q_BLOCK, KV_BLOCK]. This mask is correct for all values of query, key, value where query.size(-2) == 1 and key.size(-2) <= 512.

Hence why we don't error when query.size(-2) and key.size(-2) don't match what was given to `create_block_mask`. | ||
[user empathy day] dynamo graph breaks on `dict_subclass.get(...)` | [#141217](https://github.com/pytorch/pytorch/issues/141217) is for more comprehensive support. | ||
Possible regression of F.scaled_dot_product_attention on CPU in PyTorch 2.5 | Ohh, you are right, this doesn't seem right, and it is isolated to the FlashAttention-on-CPU impl. I have a fix here: [#141519](https://github.com/pytorch/pytorch/pull/141519) | ||
[FlexAttention] Wrong results for simple block-sparse mask | > pip install -U --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124

Yes, I can confirm that this is not reproducible on the nightly build. The related issue with low speed is still reproducible. | ||
torch.save error | This was fixed by [#136034](https://github.com/pytorch/pytorch/pull/136034), but that is not part of the 2.5 release. So please downgrade your Python version or update to a nightly. | ||
Failure in generating a kernel with 3 tile groups | Yeah, it seems like 3D tiling is broken. [#141709](https://github.com/pytorch/pytorch/pull/141709) fixes this test case, though 3D tiling is poorly tested and may have other bugs. Please report them if you find them. | ||
Memory explodes when applying Linear to reshaped nested tensor | @mahyarkoy Thanks for the report! I was able to track this down to NJT's `linear_backward()` computation. The problem is related to inefficient grad computation, and it surfaces for inputs of higher dims, such as those you're passing in after unflattening the final dim into two. Opened a PR with a fix as well. | ||
aten::avg_pool3d.out for the MPS | Please just mention the missing op in [#77764](https://github.com/pytorch/pytorch/issues/77764) | ||
libtorch v2.5.1: arm 32bit compilation broken `"int64_t is the same as long on Linux"` | Hi @malfet, thanks for the quick reply! I think adding `!(defined(__arm__) && !defined(__aarch64__))` should be a reasonably good indicator of whether we're on 32-bit ARM, according to [this](https://stackoverflow.com/a/41666292) Stack Overflow answer.
Shall I go with a pull request then?
**Edit:** added `&& !defined(__aarch64__)` to the expression to be *extra* sure that only 32-bit ARM architectures are matched. | ||
[AOTInductor] SegFault when using an AOT compiled model on a different device number | > @henrylhtsang I'm happy to close this issue. Is there a related commit I can reference?

I am a bit nervous, so let me dig around a bit more.
2.5.1 is basically 2.5 plus some cherry-picks, so the problem could be from a few months ago. | ||
graph break when training LoRA | Not really a problem, just figuring out ways to improve it. If there isn't an easy flag, then that's OK. Thank you for taking a look! | ||
RelaxedUnspecConstraint infers constant shape when i am marking as dynamic | Thanks! | ||
DISABLED test_backward_sum_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_sqrt_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_compile_forward_max_reduction_with_dim_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_forward_igamma_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_lgamma_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_forward_min_reduction_with_dim_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
`torch.distributed.barrier(group)` hangs with multiple nodes when using `torch==2.2.0` or later | I just tried nightly (i.e. main), and the hang does not reproduce. Maybe the easiest way would be to upgrade your torch version.
Between 2.2 and now, we did fix a couple of bugs around barrier. | ||
fail to build for android(arm64-v8a) | Hi, malfet! Thanks for your reply.
The above errors were due to my overlooking the third-party dependency libraries when using `git clone`. After struggling for two days, I successfully compiled PyTorch v2.5.1 into a static library targeting arm64-v8a. However, when I tried to build my own C++ project using ndk-build, various strange errors still occurred. Anyway, thank you very much for your response. | ||
Improves wording and grammar on the documentation for nn/module.py | PR for this issue:
[#140987](https://github.com/pytorch/pytorch/pull/140987) | ||
Not all methods from `mps/operations/MultiTensorApply.h` are covered by tests | Thank you for confirming. I guess it's a question of `test_optim.py` not running on the MPS shard. | ||
Failure to deploy Self-Hosted runners - High Queue Times and canceled jobs | I think you can disable the deployment workflow to prevent it from running until the issue is resolved. I don't have the owner permission to do so. | ||
DISABLED test_backward_special_xlog1py_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_compile_forward_special_modified_bessel_k0_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_polygamma_polygamma_n_3_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_special_entr_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_forward_special_chebyshev_polynomial_u_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_special_polygamma_special_polygamma_n_0_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
Segmentation fault (core dumped) in `reflection_pad1d_backward` | Closing as duplicate of [#140945](https://github.com/pytorch/pytorch/issues/140945), which seems very similar (i.e. error checking of inputs again). | ||
Request for Sparse Tensor Support in MPS Backend | Closing as duplicate of [#129842](https://github.com/pytorch/pytorch/issues/129842) | ||
Flex attention blocksize | I believe we've also changed/fixed(?) this behavior on nightly, so closing this issue for now. Feel free to re-open if there's another issue here! | ||
When I upgraded torch to 2.3.1, [F.conv2d] performance got really bad! | Cannot reproduce it on either x86 or ARM CPUs, and to the best of my knowledge we do not build binaries for Ascend 910B4, which likely include some code that is missing in this repository. So please file an issue against the project that produces those builds. If they get back to you with "It's a pure PyTorch build with such and such flags", please reopen this issue or create a new one. | ||
The RNN implementation example in the documentation may be incorrect | Thanks! Closing, as this reports an issue that is a duplicate of [#138401](https://github.com/pytorch/pytorch/issues/138401) and [#136926](https://github.com/pytorch/pytorch/issues/136926).
There is a PR open to fix this, [#136971](https://github.com/pytorch/pytorch/pull/136971); we will work towards merging it, with [#136926](https://github.com/pytorch/pytorch/issues/136926) as the issue tracking this. | ||
Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:MatMul) | Please test with `torch.onnx.export(..., dynamo=True, report=True)` using the latest torch nightly. Attach the generated report if there is an error. Thanks! | ||
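A minimal sketch of the suggested repro, assuming a recent nightly where the dynamo exporter accepts `report=True`; the toy model stands in for the user's actual model:

```python
# Hedged sketch: export a toy model with the dynamo-based exporter and request a
# diagnostic report, as suggested above. The model and input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4)).eval()
example_input = torch.randn(1, 16)
onnx_program = torch.onnx.export(model, (example_input,), dynamo=True, report=True)
```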
View size is not compatible, using Conv1d on channels-last Tensor, reported during backward on mps. | I have a fix, at least on macOS 15... | ||
Are you sure that 'metric_logging' is importable from module 'torchtune.utils'? | Please submit your issue to torchtune: https://github.com/pytorch/torchtune | ||
Nightly builds missing from PyTorch cu121 repository since November 12, 2024 | Yes, this is expected, and a result of [#138899](https://github.com/pytorch/pytorch/pull/138899).
See [#138609](https://github.com/pytorch/pytorch/issues/138609): for the upcoming 2.6 release, CUDA 11.8, CUDA 12.4, and CUDA 12.6 binaries will be released. | ||
vmap and compile generate different outputs from non-vmapped function | Why they're not equal: I guess this is just a floating-point thing; I'm not sure I can pinpoint exactly what is going on, but it's not unexpected.
If they're close, you're fine. If I set the dtype to double and print the norm of the two checks you're doing, I get 8.0464e-16 and 2.3044e-15, which is very small.

Regarding time spent in the operations, compile needs to be run (at least) once before you measure the runtime.
See this modified example:

```python
import time

import torch
import torch.nn as nn
from optree import tree_map

torch.manual_seed(0)
# torch.set_default_dtype(torch.double)


def test_func():
    def make_mlp(**kwargs):
        layers = []
        last_dim = kwargs["in_features"]
        for i in range(len(kwargs["num_cells"])):
            layers.append(nn.Linear(last_dim, kwargs["num_cells"][i]))
            layers.append(kwargs["activation_class"]())
            last_dim = kwargs["num_cells"][i]
        layers.append(nn.Linear(last_dim, kwargs["out_features"]))
        return nn.Sequential(*layers)

    mlp_kwargs = {
        "num_cells": [256, 256, 256],
        "in_features": 30,
        "out_features": 1,
        "activation_class": nn.ReLU,
    }
    critic_1 = make_mlp(**mlp_kwargs)
    critic_2 = make_mlp(**mlp_kwargs)

    # compile vmap function
    critic_params = tree_map(lambda *x: torch.stack(x), *[critic_1.state_dict(), critic_2.state_dict()])
    critic_call = lambda params, inputs: torch.func.functional_call(critic_1, params, inputs)
    critic_call_vmap = lambda x: torch.vmap(critic_call, (0, None), randomness="same")(critic_params, x)
    critic_call_vmap_compile = torch.compile(critic_call_vmap)

    # compile separate call
    critic_call_separate = lambda x: torch.stack([critic_1(x), critic_2(x)])
    critic_call_separate_compile = torch.compile(critic_call_separate)

    # generate random inputs
    batch_size = 4096
    x = torch.randn(batch_size, mlp_kwargs["in_features"])

    # call once to compile
    for i in range(1):
        y_separate = critic_call_separate_compile(x)
        y_vmap = critic_call_vmap_compile(x)

    # separate forward
    start = time.time()
    y_separate = critic_call_separate(x)
    t_separate = time.time() - start

    # separate forward with compile
    _ = critic_call_separate_compile(x)
    _ = critic_call_separate_compile(x)
    start = time.time()
    y_separate_compile = critic_call_separate_compile(x)
    t_separate_compile = time.time() - start

    # vmap forward
    _ = critic_call_vmap(x)
    _ = critic_call_vmap(x)
    start = time.time()
    y_vmap = critic_call_vmap(x)
    t_vmap = time.time() - start

    # vmap forward with compile
    _ = critic_call_vmap_compile(x)
    _ = critic_call_vmap_compile(x)
    start = time.time()
    y_vmap_compile = critic_call_vmap_compile(x)
    t_vmap_compile = time.time() - start

    print("time separate", t_separate)
    print("time separate compile", t_separate_compile)
    print("time vmap", t_vmap)
    print("time vmap compile", t_vmap_compile)
    print()
    print("y_separate == y_separate_compile:", torch.all(y_separate == y_separate_compile))
    print("y_vmap == y_vmap_compile:", (y_vmap - y_vmap_compile).norm())
    print("y_separate == y_vmap:", (y_separate - y_vmap).norm())
    print()
    print("y_separate, y_vmap are close:", torch.allclose(y_separate, y_vmap, atol=1e-6))
    print("y_vmap, y_vmap_compile are close:", torch.allclose(y_vmap, y_vmap_compile, atol=1e-6))


test_func()
```

This prints:

```
time separate 0.0056340694427490234
time separate compile 0.0038177967071533203
time vmap 0.0055561065673828125
time vmap compile 0.0028247833251953125
y_separate == y_separate_compile: tensor(True)
y_vmap == y_vmap_compile: tensor(4.1044e-07)
y_separate == y_vmap: tensor(1.3330e-06, grad_fn=<LinalgVectorNormBackward0>)
y_separate, y_vmap are close: True
y_vmap, y_vmap_compile are close: True
```

You can also try with `mode="reduce-overhead"` in compile, and you should get some more speedup (on CUDA). | ||
[XPU] Failed to build libtorch program with XPUs | @piDack: Please attach the full log from cmake. The package-checks stage is important to look at.

`torch::xpurt` is defined in https://github.com/pytorch/pytorch/blob/v2.5.1/cmake/public/xpu.cmake. Note that there is a prerequisite on line 9 ([pytorch/cmake/public/xpu.cmake, lines 9 to 13 at a8d6afb](https://github.com/pytorch/pytorch/blob/a8d6afb511a69687bbb2b7e88a3cf67917e1697e/cmake/public/xpu.cmake#L9-L13)):

```cmake
find_package(SYCLToolkit REQUIRED)
if(NOT SYCL_FOUND)
  set(PYTORCH_FOUND_XPU FALSE)
  return()
endif()
```

If the SYCL toolkit is not found, then `torch::xpurt` won't be defined. I guess you might not have the SYCL development environment available, or it is not activated; hence this issue. Try following https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html to install it. | ||
[DTensor] DTensor working with fused AdamW has unexpected CPU memory usage. | I think you need to patch in [#133728](https://github.com/pytorch/pytorch/pull/133728). It should be fixed in a newer version. | ||
DISABLED test_backward_fmin_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_compile_forward_argmin_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_sigmoid_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
DISABLED test_backward_special_i0e_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Going to try closing and see if it happens again after [#140443](https://github.com/pytorch/pytorch/pull/140443). | ||
[ROCm] Incorrect number of arguments passed to kernel | Perfect, thank you for investigating this, @aakhundov! | ||
No Python3.8 build for Pytorch 2.5.x | <p dir="auto"><a class="user-mention notranslate" data-hovercard-type="user" data-hovercard-url="/users/ghostplant/hovercard" data-octo-click="hovercard-link-click" data-octo-dimensions="link_type:self" href="https://github.com/ghostplant">@ghostplant</a> I'm still using Ubuntu-20.04, but it has a python-3.9 as part of the distro, though it's not installed by default: <a href="https://packages.ubuntu.com/focal/python3.9" rel="nofollow">https://packages.ubuntu.com/focal/python3.9</a></p> | ||
Weird Memory Leak (OOM) after minor code change [CUDA] | <p dir="auto">I was able to resolve the memory leak.<br>
The issue seems to have been cloning/reusing a tensor with a large computation graph attached.<br>
It took me a while to figure this out, since I had done the same thing before the change; making the computation that feeds in slightly more complicated is what triggered this behavior. I still don't fully understand why that is the case.</p>
<p dir="auto">However ... it is resolved so I'm happy 😄</p>
<p dir="auto">The code I replaced in decode_center is this:</p>
<div class="highlight highlight-source-python notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content=" new_x = torch.zeros_like(x, requires_grad=False)
new_x[..., 0] = x[..., 0].detach()
new_x[..., 1] = ((x[..., 1] / x[..., 0].clamp(min=1e-3)) + center_pol_azi[..., 0]).detach()
new_x[..., 2] = ((x[..., 2] / x[..., 0].clamp(min=1e-3)) + center_pol_azi[..., 1]).detach()
return new_x"><pre class="notranslate"> <span class="pl-s1">new_x</span> <span class="pl-c1">=</span> <span class="pl-s1">torch</span>.<span class="pl-en">zeros_like</span>(<span class="pl-s1">x</span>, <span class="pl-s1">requires_grad</span><span class="pl-c1">=</span><span class="pl-c1">False</span>)
<span class="pl-s1">new_x</span>[..., <span class="pl-c1">0</span>] <span class="pl-c1">=</span> <span class="pl-s1">x</span>[..., <span class="pl-c1">0</span>].<span class="pl-en">detach</span>()
<span class="pl-s1">new_x</span>[..., <span class="pl-c1">1</span>] <span class="pl-c1">=</span> ((<span class="pl-s1">x</span>[..., <span class="pl-c1">1</span>] <span class="pl-c1">/</span> <span class="pl-s1">x</span>[..., <span class="pl-c1">0</span>].<span class="pl-en">clamp</span>(<span class="pl-s1">min</span><span class="pl-c1">=</span><span class="pl-c1">1e-3</span>)) <span class="pl-c1">+</span> <span class="pl-s1">center_pol_azi</span>[..., <span class="pl-c1">0</span>]).<span class="pl-en">detach</span>()
<span class="pl-s1">new_x</span>[..., <span class="pl-c1">2</span>] <span class="pl-c1">=</span> ((<span class="pl-s1">x</span>[..., <span class="pl-c1">2</span>] <span class="pl-c1">/</span> <span class="pl-s1">x</span>[..., <span class="pl-c1">0</span>].<span class="pl-en">clamp</span>(<span class="pl-s1">min</span><span class="pl-c1">=</span><span class="pl-c1">1e-3</span>)) <span class="pl-c1">+</span> <span class="pl-s1">center_pol_azi</span>[..., <span class="pl-c1">1</span>]).<span class="pl-en">detach</span>()
<span class="pl-k">return</span> <span class="pl-s1">new_x</span></pre></div> | ||
test_torch.py: "ImportError: cannot import name 'skipIfMPS' from 'torch.testing._internal.common_utils'" | <p dir="auto"><a class="user-mention notranslate" data-hovercard-type="user" data-hovercard-url="/users/malfet/hovercard" data-octo-click="hovercard-link-click" data-octo-dimensions="link_type:self" href="https://github.com/malfet">@malfet</a> I tried <a href="https://github.com/pytorch/pytorch/blob/a8d6afb511a69687bbb2b7e88a3cf67917e1697e/test/test_torch.py">test_torch.py from a8d6afb5 (2.5.1)</a>, and it works.<br>
<sub>I don't know why <code class="notranslate">collect_env.py</code> outputs <code class="notranslate">2.5.0a0</code>; I used <a href="https://github.com/pytorch/pytorch/releases/download/v2.5.1/pytorch-v2.5.1.tar.gz">the 2.5.1 tarball</a> to build.</sub></p> | ||
CTC compute-sanitizer error in `ctc_loss_backward_log_beta_gpu_kernel` | <p dir="auto">nope, send us the PR thanks</p> | ||
DISABLED test_comprehensive_nan_to_num_cuda_int64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_unsqueeze_copy_cuda_int64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
`torch._transformer_encoder_layer_forward` usage of SDPA attention instead of native_MHA | <p dir="auto">we dispatch to sdpa when it is using the fused version: </p><div class="Box Box--condensed my-2">
<div class="Box-header f6">
<p class="mb-0 text-bold">
<a href="https://github.com/pytorch/pytorch/blob/bf78a0fa968103cff9e2652316071d6cdb0e6296/aten/src/ATen/native/transformers/cuda/attention.cu#L572">pytorch/aten/src/ATen/native/transformers/cuda/attention.cu</a>
</p>
<p class="mb-0 color-fg-muted">
Line 572
in
<a data-pjax="true" class="commit-tease-sha Link--inTextBlock" href="/pytorch/pytorch/commit/bf78a0fa968103cff9e2652316071d6cdb0e6296">bf78a0f</a>
</p>
</div>
<div itemprop="text" class="Box-body p-0 blob-wrapper blob-wrapper-embedded data">
<table class="highlight tab-size mb-0 js-file-line-container" data-tab-size="8" data-paste-markdown-skip="">
<tbody><tr class="border-0">
<td id="L572" class="blob-num border-0 px-3 py-0 color-bg-default" data-line-number="572"></td>
<td id="LC572" class="blob-code border-0 px-3 py-0 color-bg-default blob-code-inner js-file-line"> <span class="pl-k">if</span> (!mask.<span class="pl-c1">has_value</span>() && no_seq_len_1_nested && </td>
</tr>
</tbody></table>
</div>
</div>
<p></p> | ||
[Feature request] Enabling padding-free training with FlexAttention | <p dir="auto">I might have had a brain fart indeed. I was still thinking of a one-dimensional softmax (because in my mind the batch dimension disappeared), but there will still be a 2D softmax with one very long sequence - one for each token. So I think you're right. Will close this!</p> | ||
onnx.export(dynamo=True): input_names processing is broken when dynamic_axes and list inputs are used | <p dir="auto">Tested locally and succeed.</p> | ||
Make nvidia pypi dependencies optional? | <p dir="auto">I agree with eval-dev: currently, installing CPU-only PyTorch with tools like Poetry is a nightmare. Having CUDA dependencies as extras would be perfect. TensorFlow does that:<br>
<a href="https://www.tensorflow.org/install/pip" rel="nofollow">https://www.tensorflow.org/install/pip</a></p>
<div class="snippet-clipboard-content notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="# CPU only:
python3 -m pip install tensorflow
# GPU support:
python3 -m pip install tensorflow[and-cuda] "><pre class="notranslate"><code class="notranslate"># CPU only:
python3 -m pip install tensorflow
# GPU support:
python3 -m pip install tensorflow[and-cuda]
</code></pre></div>
<p dir="auto">EDIT:<br>
Ok, I can see there is already a discussion about it: <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2635839084" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/139761" data-hovercard-type="issue" data-hovercard-url="/pytorch/pytorch/issues/139761/hovercard?comment_id=2457836325&comment_type=issue_comment" href="https://github.com/pytorch/pytorch/issues/139761#issuecomment-2457836325">#139761 (comment)</a></p> | ||
[MPS] Add support for output_channels > 2**16 in `F.conv1d` | <p dir="auto">Fix incoming.</p>
<p dir="auto"><a class="user-mention notranslate" data-hovercard-type="user" data-hovercard-url="/users/pytorchbot/hovercard" data-octo-click="hovercard-link-click" data-octo-dimensions="link_type:self" href="https://github.com/pytorchbot">@pytorchbot</a> label 'module: mps'</p> | ||
DISABLED test_backward_round_decimals_0_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | <p dir="auto">Going to try closing and see if it happens again after <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2653207752" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/140443" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/140443/hovercard" href="https://github.com/pytorch/pytorch/pull/140443">#140443</a>.</p> | ||
DISABLED test_backward_special_ndtri_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | <p dir="auto">Going to try closing and see if it happens again after <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2653207752" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/140443" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/140443/hovercard" href="https://github.com/pytorch/pytorch/pull/140443">#140443</a>.</p> | ||
DISABLED test_backward_fmax_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | <p dir="auto">Going to try closing and see if it happens again after <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2653207752" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/140443" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/140443/hovercard" href="https://github.com/pytorch/pytorch/pull/140443">#140443</a>.</p> | ||
DISABLED test_compile_forward_argmax_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | <p dir="auto">Going to try closing and see if it happens again after <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2653207752" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/140443" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/140443/hovercard" href="https://github.com/pytorch/pytorch/pull/140443">#140443</a>.</p> | ||
Floating point exception (core dumped) in `torch._weight_norm` | <p dir="auto">Closing as duplicate of <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2352860747" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/128695" data-hovercard-type="issue" data-hovercard-url="/pytorch/pytorch/issues/128695/hovercard" href="https://github.com/pytorch/pytorch/issues/128695">#128695</a> (which was filed a while back by the same author)<br>
(And reopening <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2359924859" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/128958" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/128958/hovercard" href="https://github.com/pytorch/pytorch/pull/128958">#128958</a> )</p> | ||
DISABLED test_comprehensive_empty_cuda_float64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 400 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_narrow_copy_cuda_bool (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 400 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_cumprod_cuda_float32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
The test case test/export/test_export.py::TestExport.test_slice_with_floordiv failed but not exposed in trunk CI. | <p dir="auto">The test case has been set expected failure.</p> | ||
[FR] Will Pytorch support GemmGrouped as built-in operator? | <p dir="auto">cc <a class="user-mention notranslate" data-hovercard-type="user" data-hovercard-url="/users/eellison/hovercard" data-octo-click="hovercard-link-click" data-octo-dimensions="link_type:self" href="https://github.com/eellison">@eellison</a> who I know was investigating ways to codegen this in Inductor, but I think there is a valid argument for a top-level grouped_gemm func in PyTorch.</p> | ||
DISABLED test_comprehensive_linalg_lu_factor_ex_cuda_float64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1050 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_minimum_cuda_float64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_nn_functional_mse_loss_cuda_float32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_slice_index_changing_sign_cuda (__main__.TestInductorDynamicCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 400 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_all_cuda_float32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_special_log_ndtr_cuda_float64 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 350 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_nn_functional_poisson_nll_loss_cuda_int32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_norm_inf_cuda_float32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1200 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_mm_concat_cpu (__main__.FreezingCpuTests) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 3100 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
torch.cuda.memory_usage() returns percent of time over the past sample period global memory being read/written for Nvidia and MegaBytes for AMD | <p dir="auto">Great, looks like we're thinking about the same thing :) <a class="issue-link js-issue-link" data-error-text="Failed to load title" data-id="2658020979" data-permission-text="Title is private" data-url="https://github.com/pytorch/pytorch/issues/140685" data-hovercard-type="pull_request" data-hovercard-url="/pytorch/pytorch/pull/140685/hovercard" href="https://github.com/pytorch/pytorch/pull/140685">#140685</a>. Let's go with this PR. I agree on the memory utilization (I actually changed the API to be memory utilization in my PR, but yes, it's BC-breaking), so let's probably go with this.</p> | ||
test_ops.py fails when GPU is present | <p dir="auto">oh yeah, just fix the test</p> | ||
[DomainsOnly] Jobs fail with GLIBC version not found | <blockquote>
<p dir="auto">We're also seeing it in our FBGEMM CI jobs - <a href="https://github.com/pytorch/FBGEMM/actions/runs/11843064011/job/33003165717#step:5:28">https://github.com/pytorch/FBGEMM/actions/runs/11843064011/job/33003165717#step:5:28</a></p>
</blockquote>
<p dir="auto">In the FBGEMM CI, the base image is <code class="notranslate">pytorch/manylinux-builder:rocm6.1</code>. It has GLIBC 2.17 and does not satisfy GLIBC_2.27. Moreover, the guest OS of <code class="notranslate">pytorch/manylinux-builder:rocm6.1</code> is CentOS 7, is it too old and could be a potential problem in the future?</p> | ||
[AOTI] AOT Compile NaViT - AttributeError: 'int' object has no attribute 'node' | <p dir="auto">Yup</p>
<div class="snippet-clipboard-content notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="diff --git a/torch/_subclasses/fake_tensor.py b/torch/_subclasses/fake_tensor.py
index 985b274cf2b..bdc0bc9ef4b 100644
--- a/torch/_subclasses/fake_tensor.py
+++ b/torch/_subclasses/fake_tensor.py
@@ -1931,6 +1931,7 @@ class FakeTensorMode(TorchDispatchMode):
and len(flat_arg_fake_tensors) != 0
and not has_symbolic_sizes
and not avoiding_device_init
+ and False
):
const_flat_args = [
a.constant if self.is_our_fake(a) else a for a in flat_args"><pre class="notranslate"><code class="notranslate">diff --git a/torch/_subclasses/fake_tensor.py b/torch/_subclasses/fake_tensor.py
index 985b274cf2b..bdc0bc9ef4b 100644
--- a/torch/_subclasses/fake_tensor.py
+++ b/torch/_subclasses/fake_tensor.py
@@ -1931,6 +1931,7 @@ class FakeTensorMode(TorchDispatchMode):
and len(flat_arg_fake_tensors) != 0
and not has_symbolic_sizes
and not avoiding_device_init
+ and False
):
const_flat_args = [
a.constant if self.is_our_fake(a) else a for a in flat_args
</code></pre></div>
<p dir="auto">this makes the test pass</p> | ||
DISABLED test_comprehensive_zero__cuda_bool (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> | ||
DISABLED test_comprehensive_diagonal_scatter_cuda_int32 (__main__.TestInductorOpInfoCUDA) | <p dir="auto">Resolving the issue because the test is not flaky anymore after 1150 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive</p> |