docs: support GlobalTensor pipe entry #602
Conversation
Code Review
This pull request updates the PTO IR manual and design documentation to support a global entry in pipe communication, allowing GlobalTensor-like GM view descriptors alongside the existing local tile buffers. It introduces new talloc operations for producer-side allocation and modifies the initialization procedure for global-only GM FIFO pipes to omit local consumer buffers. Review feedback asked for consistency in how the documentation describes the binding of the talloc operations.
```diff
  `pto.aic_initialize_pipe` / `pto.aiv_initialize_pipe` with the matching
- `pto.tpush_*` / `pto.tpop_*` / `pto.tfree_*` ops in the same function.
+ `pto.talloc_*` / `pto.tpush_*` / `pto.tpop_*` / `pto.tfree_*` ops in the same
  - `slot_size` is expressed in bytes and uses the pre-split logical tile size.
```
Codex Review (this comment is automatically updated by the review bot)
Summary: 3 issues detected. 1 will directly cause the EmitC-emitted code to fail to compile; 2 will break cross-function pairing/protocol consistency for GlobalTensor pipes. Findings
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 23447a4d8b
```cpp
    id = std::to_string(idAttr.getInt());
  else
    id = std::to_string(reinterpret_cast<uintptr_t>(info.op));
  return PipePeerKey{"__pto_globaltensor_pipe", "id_" + id, info.dirMask};
```
Include function scope in global-only pipe grouping key
getGlobalTensorPipeKey collapses every global-only pipe with the same frontend id and dirMask into one module-wide component, because it uses a constant owner ("__pto_globaltensor_pipe"). In collectPeerAwareInit this causes unrelated kernels that reuse the default id = 0 to be merged, and later buildPeerAwareComponents enforces a single shared signature/flag_base for all of them. A valid module with multiple independent pipe pairs can then fail with a false "peer pipe init ops must agree" error (or get unintended shared flag allocation) even though IDs are only required to be unique per function.
A5 board test passed.
A3 board test completed (some cases skipped).