feat: support range download for IM large files#283
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
📝 Walkthrough

Refactors IM resource download to support probe + chunked HTTP Range downloads with retries and commit-based saves via a new FileIO BeginSave/SaveFile API; adds Content-Range parsing, per-chunk write/validation, and multiple unit/integration tests and docs updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runtime as Runtime/Caller
    participant HTTP as Origin HTTP Server
    participant FileIO as FileIO (BeginSave/SaveFile)
    Runtime->>HTTP: GET with Range: bytes=0-(probeChunkSize-1)
    HTTP-->>Runtime: 206 Partial Content (Content-Range, probe bytes)
    Runtime->>FileIO: BeginSave(outputPath)
    FileIO-->>Runtime: SaveFile handle
    Runtime->>FileIO: WriteAt(offset=0, probe bytes)
    loop for each chunk
        Runtime->>HTTP: GET with Range: bytes=start-end
        HTTP-->>Runtime: 206 Partial Content (chunk bytes)
        Runtime->>FileIO: WriteAt(offset=start, chunk bytes)
    end
    Runtime->>FileIO: Stat() -> verify size == parsed total
    Runtime->>FileIO: Commit()
    FileIO-->>Runtime: SaveResult (size, resolved path)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Greptile Summary

This PR adds chunked HTTP Range download support for large IM resource files: a probe request fetches the first 128 KB and extracts the total file size from the `Content-Range` header.

Confidence Score: 4/5. Safe to merge; the range download logic is well-structured and major concerns from prior threads appear addressed. Two P2 findings remain that are worth addressing but are not blocking. No P0/P1 issues found. The two P2 findings describe defensive correctness improvements rather than definite runtime failures on a well-behaved server. Prior P0/P1 thread concerns (dead return, no-op semaphore, probe byte count, stale file on Rename failure) appear resolved in the current code.

P2 findings: `shortcuts/im/im_messages_resources_download.go`, `rangeChunkReader.Read()` around lines 185–244

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI (Execute)
    participant DL as downloadIMResourceToPath
    participant CR as rangeChunkReader
    participant API as Lark API
    CLI->>DL: downloadIMResourceToPath(messageID, fileKey, "file", path)
    DL->>API: GET /resources/:key Range: bytes=0-131071
    alt Server supports range (206)
        API-->>DL: 206 Partial Content + Content-Range: bytes 0-N/totalSize
        DL->>CR: newRangeChunkReader(probeBody, totalSize)
        DL->>DL: Save(path, CR)
        loop each 8 MB chunk (nextOffset < totalSize)
            CR->>API: GET /resources/:key Range: bytes=X-Y (with retry/backoff)
            API-->>CR: 206 Partial Content
            CR-->>DL: stream bytes
        end
    else Server does not support range (200)
        API-->>DL: 200 OK + full body
        DL->>DL: Save(path, body)
    end
    DL->>DL: assert result.Size() == totalSize
    DL-->>CLI: (savedPath, sizeBytes)
```
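Under the assumptions in the diagram (a 128 KB probe followed by 8 MB chunks), splitting the remaining bytes into inclusive ranges might look like the following sketch. `buildChunkTasks`, `probeChunkSize`, and `rangeChunkSize` mirror names mentioned in the review but are not the PR's verified code.

```go
package main

import "fmt"

const (
	probeChunkSize = 128 * 1024      // assumed probe size from the PR description
	rangeChunkSize = 8 * 1024 * 1024 // assumed chunk size (8 MB)
)

// chunkTask is an inclusive byte range, matching the Range header form
// "bytes=start-end".
type chunkTask struct{ start, end int64 }

// buildChunkTasks splits [probeChunkSize, totalSize) into fixed-size
// chunks; the probe already covered bytes 0..probeChunkSize-1.
func buildChunkTasks(totalSize int64) []chunkTask {
	var tasks []chunkTask
	for start := int64(probeChunkSize); start < totalSize; start += rangeChunkSize {
		end := start + rangeChunkSize - 1
		if end > totalSize-1 {
			end = totalSize - 1
		}
		tasks = append(tasks, chunkTask{start: start, end: end})
	}
	return tasks
}

func main() {
	for _, t := range buildChunkTasks(20 * 1024 * 1024) { // a 20 MB file
		fmt.Printf("bytes=%d-%d\n", t.start, t.end)
	}
}
```

Note the final chunk is clamped to `totalSize-1`, so the last Range request is usually shorter than 8 MB.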
Reviews (7). Last reviewed commit: "(im) support im oapi range download larg..."
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@shortcuts/im/im_messages_resources_download.go`:
- Around line 272-284: The retry loop can start another DoAPIStream call after
sleepIMDownloadRetry observes cancellation; update the loops that call
runtime.DoAPIStream with imDownloadRequestRetries to check ctx.Err() immediately
after sleepIMDownloadRetry and return nil, ctx.Err() if canceled so no extra
attempt runs; specifically modify the loops that use runtime.DoAPIStream,
imDownloadRequestRetries and sleepIMDownloadRetry (also the similar loop at the
later block around lines 289–297) to return on ctx.Err() before the next
iteration.
- Around line 189-215: The 206 handling currently only calls parseTotalSize and
doesn't verify the Content-Range start/end or check the number of bytes actually
written; update the logic that handles http.StatusPartialContent (and the
similar blocks referenced by downloadAndWriteChunk and the other 206 handlers)
to parse start, end, and total from downloadResp.Header.Get("Content-Range")
(not just total), verify the returned start/end exactly match the requested task
range, and ensure the number of bytes written by writeChunkAt equals
(end-start+1) before accepting the chunk; if mismatched, return a network/error
immediately. Modify or overload writeChunkAt (or the call sites) so it can
validate and return the written length for this check, and keep using
buildChunkTasks, sem, and downloadAndWriteChunk to locate locations to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 74c0c338-e7d6-4dc7-85c6-bcb98affb66a
📒 Files selected for processing (4)
- shortcuts/common/runner.go
- shortcuts/im/helpers_network_test.go
- shortcuts/im/helpers_test.go
- shortcuts/im/im_messages_resources_download.go
Force-pushed: 5faf274 → ea10aa6
♻️ Duplicate comments (3)
shortcuts/im/im_messages_resources_download.go (3)
369-393: ⚠️ Potential issue | 🟠 Major: Chunk download validates written bytes but not the response range.

`downloadAndWriteChunk` correctly validates that `written == expected` (lines 390-392), which catches short reads. However, similar to the probe, it doesn't validate that the `Content-Range` header in the 206 response matches the requested `start`-`end`. If the server returns a different range, the data would be written at the wrong offset.

🛠️ Suggested validation

```diff
 if downloadResp.StatusCode != http.StatusPartialContent {
 	return output.ErrNetwork("unexpected status code: %d", downloadResp.StatusCode)
 }
+
+// Validate Content-Range matches requested range
+respStart, respEnd, _, err := parseContentRange(downloadResp.Header.Get("Content-Range"))
+if err != nil {
+	return output.ErrNetwork("invalid Content-Range: %s", err)
+}
+if respStart != start || respEnd != end {
+	return output.ErrNetwork("response range %d-%d does not match requested %d-%d", respStart, respEnd, start, end)
+}
 written, err := writeChunkAt(file, downloadResp.Body, start)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/im_messages_resources_download.go` around lines 369 - 393, In downloadAndWriteChunk, validate the HTTP 206 response's Content-Range header before writing: parse downloadResp.Header.Get("Content-Range") (in function downloadAndWriteChunk) to ensure the returned range start and end match the requested start and end and that the reported length equals end-start+1; if parsing fails or the range doesn't match, return an error (similar to downloadResponseError/output.ErrNetwork) and avoid calling writeChunkAt. This prevents writing data at the wrong offset when the server returns a different range.
272-284: ⚠️ Potential issue | 🟡 Minor: Return immediately once backoff observes cancellation.

After `sleepIMDownloadRetry()` returns due to `ctx.Done()`, the loop still proceeds to the next iteration and calls `DoAPIStream` again. Add a context check after the sleep to avoid the extra attempt.

🔧 Proposed fix

```diff
 sleepIMDownloadRetry(ctx, attempt)
+if ctx.Err() != nil {
+	return nil, ctx.Err()
+}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/im_messages_resources_download.go` around lines 272 - 284, The loop in the download retry logic (using imDownloadRequestRetries, runtime.DoAPIStream and sleepIMDownloadRetry) can perform an extra request after sleep if the context was canceled; add a context cancellation check immediately after sleepIMDownloadRetry(ctx, attempt) and return nil, ctx.Err() (or propagate ctx.Err()) when ctx.Err() != nil to avoid calling runtime.DoAPIStream again with a canceled context.
189-216: ⚠️ Potential issue | 🟠 Major: Validate the full `Content-Range` before accepting a partial response.

The probe chunk handling at line 194 writes the response body to offset 0 without validating that the server actually returned bytes `0-N`. If the server returns a misaligned range (e.g., `bytes 1000-2000/total`), the data will be written at offset 0, corrupting the file. The `parseTotalSize()` function only extracts the total, not the start/end values. Consider parsing and validating the full `Content-Range` header to ensure the returned range matches the requested range before writing.

🛠️ Suggested approach

Create a `parseContentRange` function that returns `start`, `end`, and `total`, then validate:

```go
start, end, total, err := parseContentRange(downloadResp.Header.Get("Content-Range"))
if err != nil {
	return "", 0, output.ErrNetwork("invalid Content-Range header: %s", err)
}
if start != 0 || end > probeChunkSize-1 {
	return "", 0, output.ErrNetwork("unexpected range in response: %d-%d", start, end)
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/im_messages_resources_download.go` around lines 189 - 216, The probe chunk handling currently calls parseTotalSize and writes downloadResp.Body at offset 0 via writeChunkAt without validating the returned byte range; change this to parse the full Content-Range (create/use a parseContentRange that returns start, end, total) and validate that start == 0 and end is within the expected probe chunk size before calling writeChunkAt on tmpFile; if validation fails, return an output.ErrNetwork with a clear "unexpected range" message, otherwise use the parsed total for sizeBytes and proceed to buildChunkTasks/downloadAndWriteChunk as before.
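One possible shape for the suggested `parseContentRange` helper, assuming the standard `bytes <start>-<end>/<total>` form of the header; the error messages and validation rules here are illustrative, not the PR's actual code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseContentRange parses "bytes <start>-<end>/<total>" and returns
// all three values so callers can verify the server honoured the
// requested range before writing the body at an offset.
func parseContentRange(header string) (start, end, total int64, err error) {
	const prefix = "bytes "
	if !strings.HasPrefix(header, prefix) {
		return 0, 0, 0, fmt.Errorf("invalid Content-Range: %q", header)
	}
	rangePart, totalPart, ok := strings.Cut(header[len(prefix):], "/")
	if !ok {
		return 0, 0, 0, fmt.Errorf("invalid Content-Range: %q", header)
	}
	startStr, endStr, ok := strings.Cut(rangePart, "-")
	if !ok {
		return 0, 0, 0, fmt.Errorf("invalid range in Content-Range: %q", header)
	}
	if start, err = strconv.ParseInt(startStr, 10, 64); err != nil {
		return 0, 0, 0, fmt.Errorf("parse start: %w", err)
	}
	if end, err = strconv.ParseInt(endStr, 10, 64); err != nil {
		return 0, 0, 0, fmt.Errorf("parse end: %w", err)
	}
	if totalPart == "*" {
		return 0, 0, 0, fmt.Errorf("unknown total size in Content-Range: %q", header)
	}
	if total, err = strconv.ParseInt(totalPart, 10, 64); err != nil {
		return 0, 0, 0, fmt.Errorf("parse total size: %w", err)
	}
	// Reject internally inconsistent ranges up front.
	if start < 0 || end < start || total <= end {
		return 0, 0, 0, fmt.Errorf("inconsistent Content-Range: %q", header)
	}
	return start, end, total, nil
}

func main() {
	start, end, total, err := parseContentRange("bytes 0-131071/10485760")
	fmt.Println(start, end, total, err) // prints: 0 131071 10485760 <nil>
}
```

A caller would then compare the returned `start`/`end` against the requested task range and reject mismatches before any `WriteAt`.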
🧹 Nitpick comments (1)
shortcuts/im/helpers_network_test.go (1)
321-354: Retry tests incur real wall-clock delays (~900 ms total).

The retry logic uses `sleepIMDownloadRetry`, which is not mocked, causing these tests to wait through actual exponential backoff delays (300 ms + 600 ms for 2 failed attempts). While this tests the real behavior, it slows down the test suite. Consider extracting the sleep function as a package-level variable that can be overridden in tests, or accept the delay as acceptable for integration-style testing.

💡 Optional: Make sleep mockable

In the implementation file:

```go
var sleepFunc = sleepIMDownloadRetry

func sleepIMDownloadRetry(ctx context.Context, attempt int) {
	// ... existing implementation
}
```

In tests:

```go
func init() {
	sleepFunc = func(ctx context.Context, attempt int) {} // no-op for tests
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/helpers_network_test.go` around lines 321 - 354, The tests are incurring real backoff delays because downloadIMResourceToPath calls sleepIMDownloadRetry directly; refactor by introducing a package-level variable (e.g., sleepFunc) initialized to sleepIMDownloadRetry and change downloadIMResourceToPath (and any callers) to call sleepFunc(ctx, attempt) instead of sleepIMDownloadRetry; in tests override sleepFunc to a no-op (and optionally restore it) so retries run instantly. Ensure you reference and update the symbols sleepIMDownloadRetry, sleepFunc, and downloadIMResourceToPath so the test can set sleepFunc = func(ctx context.Context, attempt int) {} to avoid real sleeps.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 04629503-8d6f-410e-8ee4-e540a022fef7
📒 Files selected for processing (6)
- shortcuts/common/runner.go
- shortcuts/im/helpers_network_test.go
- shortcuts/im/helpers_test.go
- shortcuts/im/im_messages_resources_download.go
- skills/lark-im/SKILL.md
- skills/lark-im/references/lark-im-messages-resources-download.md
✅ Files skipped from review due to trivial changes (3)
- shortcuts/common/runner.go
- skills/lark-im/references/lark-im-messages-resources-download.md
- shortcuts/im/helpers_test.go
Force-pushed: ea10aa6 → baa8261
🧹 Nitpick comments (1)
shortcuts/im/helpers_test.go (1)
492-527: Well-structured test with good coverage.

The table-driven test comprehensively covers the `parseTotalSize` function, including success cases and all major error paths. The substring matching for error assertions is appropriate for maintainability. One minor gap: the implementation checks for `totalSize < 0` (lines 355-356 in `im_messages_resources_download.go`), but there's no test case for a negative total like `"bytes 0-15/-100"`. Consider adding this edge case for completeness.

🧪 Optional: Add test case for negative total

```diff
 {name: "unknown total size", contentRange: "bytes 0-99/*", wantErr: `unknown total size in content-range: "bytes 0-99/*"`},
 {name: "invalid total", contentRange: "bytes 0-15/not-a-number", wantErr: "parse total size:"},
+{name: "negative total", contentRange: "bytes 0-15/-100", wantErr: "invalid total size:"},
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/helpers_test.go` around lines 492 - 527, Add a table-driven test case to TestParseTotalSize that covers the negative-total branch in parseTotalSize: insert a test entry like {name: "negative total", contentRange: "bytes 0-15/-100", wantErr: "total size"} so the test calls parseTotalSize and asserts the returned error contains the "total size" substring (matching the implementation's totalSize < 0 check in im_messages_resources_download.go); this ensures the negative total-size path in parseTotalSize is exercised.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 028b515c-51cf-41b2-a7ef-7fb09176f88f
📒 Files selected for processing (6)
- shortcuts/common/runner.go
- shortcuts/im/helpers_network_test.go
- shortcuts/im/helpers_test.go
- shortcuts/im/im_messages_resources_download.go
- skills/lark-im/SKILL.md
- skills/lark-im/references/lark-im-messages-resources-download.md
✅ Files skipped from review due to trivial changes (1)
- skills/lark-im/references/lark-im-messages-resources-download.md
🚧 Files skipped from review as they are similar to previous changes (3)
- skills/lark-im/SKILL.md
- shortcuts/im/im_messages_resources_download.go
- shortcuts/common/runner.go
Force-pushed: baa8261 → 9a88875
Force-pushed: 9a88875 → ab54242
🧹 Nitpick comments (3)
shortcuts/im/im_messages_resources_download.go (2)
274-289: Consider adding ctx.Err() check after sleep to avoid one extra API call.

After `sleepIMDownloadRetry`, the loop continues to the next `DoAPIStream` call. If the context was canceled during the sleep, one additional (failing) API call is made before returning. This is a minor inefficiency.

♻️ Suggested improvement

```diff
 sleepIMDownloadRetry(ctx, attempt)
+if err := ctx.Err(); err != nil {
+	return nil, err
+}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/im_messages_resources_download.go` around lines 274 - 289, The loop calling runtime.DoAPIStream (in the block that iterates up to imDownloadRequestRetries and calls sleepIMDownloadRetry) should check ctx.Err() immediately after sleepIMDownloadRetry and before making the next DoAPIStream call; update the retry loop to return ctx.Err() if the context was canceled after sleeping (use the existing ctx variable and the same return pattern used elsewhere) to avoid issuing one extra API call when the context is canceled.
196-198: Probe chunk write doesn't validate byte count.

The probe chunk written at offset 0 doesn't verify that the written bytes match the expected probe size or the `Content-Range` returned by the server. While the final file size check (lines 216-218) catches size mismatches, validating each chunk individually provides earlier detection of corrupted responses.

♻️ Suggested improvement

```diff
-if _, err := writeChunkAt(saveFile, downloadResp.Body, 0); err != nil {
+probeWritten, err := writeChunkAt(saveFile, downloadResp.Body, 0)
+if err != nil {
 	return "", 0, err
 }
+expectedProbe := min(probeChunkSize, totalSize)
+if probeWritten != expectedProbe {
+	return "", 0, output.ErrNetwork("probe chunk size mismatch: expected %d, got %d", expectedProbe, probeWritten)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/im/im_messages_resources_download.go` around lines 196 - 198, The probe chunk write using writeChunkAt(saveFile, downloadResp.Body, 0) must validate the number of bytes written against the expected probe size or the server's Content-Range/Content-Length; update the code around that call to capture the returned written count and error (e.g., n, err := writeChunkAt(...)), return any error, and if n does not equal the expected probe size (or the size parsed from downloadResp.Header.Get("Content-Range")/Content-Length), return a size-mismatch error so probe failures are detected immediately (refer to writeChunkAt, downloadResp, saveFile and the Content-Range/Content-Length headers).

shortcuts/common/runner_jq_test.go (1)
118-124: Consider returning valid stub implementations instead of nil.

`testSaveFile.Stat()` and `testSaveFile.Commit()` return `nil` for `FileInfo` and `SaveResult` respectively. While this works for the current test (`TestRuntimeContext_FileIO_UsesExecutionContext` doesn't exercise these methods), callers expecting valid interface implementations would panic on `nil.Size()`. For future-proofing, consider returning minimal stub structs:

♻️ Suggested improvement

```diff
+type testFileInfo struct{}
+func (testFileInfo) Size() int64 { return 0 }
+func (testFileInfo) IsDir() bool { return false }
+func (testFileInfo) Mode() fs.FileMode { return 0 }
+
+type testSaveResult struct{}
+func (testSaveResult) Size() int64 { return 0 }
+
 type testSaveFile struct{}
 func (testSaveFile) Write(p []byte) (int, error) { return len(p), nil }
 func (testSaveFile) WriteAt(p []byte, _ int64) (int, error) { return len(p), nil }
-func (testSaveFile) Stat() (fileio.FileInfo, error) { return nil, nil }
-func (testSaveFile) Commit() (fileio.SaveResult, error) { return nil, nil }
+func (testSaveFile) Stat() (fileio.FileInfo, error) { return testFileInfo{}, nil }
+func (testSaveFile) Commit() (fileio.SaveResult, error) { return testSaveResult{}, nil }
 func (testSaveFile) Abort() error { return nil }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shortcuts/common/runner_jq_test.go` around lines 118 - 124, testSaveFile.Stat() and testSaveFile.Commit() currently return nil which can cause panics if callers call methods like Size(); implement and return minimal stub implementations instead: create a small stub type that implements fileio.FileInfo (e.g., stubFileInfo with methods Name, Size, Mode, ModTime, IsDir, Sys) and a stub type for fileio.SaveResult (e.g., stubSaveResult with whatever methods SaveResult requires), then have testSaveFile.Stat() return &stubFileInfo{size:0,...} and testSaveFile.Commit() return &stubSaveResult{/* defaults */}, updating the test types referenced (testSaveFile.Stat and testSaveFile.Commit) to return those instances rather than nil.
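The nil-safe stub pattern suggested above can be sketched as a runnable standalone example; the `FileInfo`/`SaveResult` interfaces here are hypothetical minimal versions of the `fileio` types, whose real method sets may differ.

```go
package main

import (
	"fmt"
	"io/fs"
)

// Hypothetical minimal interfaces mirroring what the review describes;
// the real fileio.FileInfo / fileio.SaveResult may declare more methods.
type FileInfo interface {
	Size() int64
	IsDir() bool
	Mode() fs.FileMode
}

type SaveResult interface {
	Size() int64
}

// Stubs returned by the test double instead of nil, so callers that do
// info.Size() never dereference a nil interface value.
type testFileInfo struct{}

func (testFileInfo) Size() int64       { return 0 }
func (testFileInfo) IsDir() bool       { return false }
func (testFileInfo) Mode() fs.FileMode { return 0 }

type testSaveResult struct{}

func (testSaveResult) Size() int64 { return 0 }

type testSaveFile struct{}

func (testSaveFile) Stat() (FileInfo, error)     { return testFileInfo{}, nil }
func (testSaveFile) Commit() (SaveResult, error) { return testSaveResult{}, nil }

func main() {
	var f testSaveFile
	info, _ := f.Stat()
	res, _ := f.Commit()
	fmt.Println(info.Size(), res.Size()) // prints: 0 0
}
```

Returning concrete zero-value stubs costs nothing and keeps the double safe for future callers of `Stat`/`Commit`.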
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e0f5b4b1-a3e6-45ad-8d76-fb3d449fb2fa
📒 Files selected for processing (6)
- extension/fileio/types.go
- internal/vfs/localfileio/localfileio.go
- internal/vfs/localfileio/localfileio_test.go
- shortcuts/common/runner_jq_test.go
- shortcuts/im/helpers_network_test.go
- shortcuts/im/im_messages_resources_download.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/vfs/localfileio/localfileio_test.go
Force-pushed: ab54242 → ae2ec29
Change-Id: I38e6f6f9cf8b8711dc40650d19c77503f4e44989

feat: file io support write io

Change-Id: Ic6ddb8ce5a173ce14198061d8dcc77194dee7a6b

Revert "feat: file io support write io"

This reverts commit ab54242.
Force-pushed: ae2ec29 → 8983788
🚀 PR Preview Install Guide

🧰 CLI update:

```shell
npm i -g https://pkg.pr.new/larksuite/cli/@larksuite/cli@89837883f2198262e776c54a3e9715c4deb8e7b5
```

🧩 Skill update:

```shell
npx skills add chenxingtong-bytedance/cli#feat/support_download_large_file -y -g
```
Change-Id: I38e6f6f9cf8b8711dc40650d19c77503f4e44989
Summary
Add range download support for IM OAPI resources so lark-cli can reliably download large files. This improves stability for large payloads and network interruptions.
Changes
Test Plan
`lark im message download ...`: downloading small files works as expected

Related Issues
None
Summary by CodeRabbit
New Features
Bug Fixes / Reliability
Documentation
Tests