[pull] master from libretro:master#936
Merged
pull[bot] merged 30 commits into Alexandre1er:master from libretro:master on Apr 17, 2026
The IHDR check rejects any PNG whose decoded output buffer
(width * height * 4) would not fit under a threshold. That
threshold was 0x80000000 (2 GiB), which rejects valid images
around 23000x28000 that modern 64-bit RetroArch builds have
plenty of memory to decode.
Raise the threshold to 0x100000000 (4 GiB). The two jobs the
check performs are preserved:
1) Overflow safety. The multiplication is done in uint64_t
(via the existing cast). IHDR width and height are each capped
at 2^31-1 by the PNG spec, so width * height * 4 stays below
2^64 and cannot wrap; the 4 GiB threshold itself is far below
that.
2) Reject obviously-malformed or malicious PNGs before any
large allocation. 4 GiB is still a hard ceiling and rejects
65536x65536 (16 GiB), which is the classic overflow-wrap
probe case.
Memory-constrained platforms are unaffected in substance: a 32-bit
target that can't actually allocate 3 GiB will still fail at the
malloc call further down (false_end branch), just one layer deeper
than before. No platform that could allocate such a buffer before
this change loses that ability, and platforms that could not still
cannot.
The literal gains a ULL suffix to keep the constant unambiguously
64-bit on LLP64 targets (Windows) where unsigned long is 32-bit.
Verified by header probes:
23200x28000 (2.42 GiB) -- was rejected, now accepted
32768x32768 (4.00 GiB) -- still rejected (at the new boundary)
65536x65536 (wraps u32) -- still rejected (overflow protection OK)
1 x 1073741823 (~4 GiB -4B) -- accepted (boundary probe)
Rewrite the upstream C++14 Discord-RPC library (deps/discord-rpc, ~2000
lines across 14 files) as plain C89 and merge it into network/discord.c
following RetroArch and libretro-common conventions. The result is a
single self-contained translation unit.
What changed in the implementation
----------------------------------
* No background IO thread. The runloop already calls Discord_RunCallbacks
and Discord_UpdateConnection every frame, so the upstream IoThreadHolder
(with its std::thread, std::condition_variable, and std::mutex) was
doing nothing useful in single-threaded mode and causing wakeup overhead
in multi-threaded mode. Removing it eliminates all five mutex locks and
five atomic_bool exchanges per RunCallbacks invocation.
* No auto-register. Discord_Register/Discord_RegisterSteamGame are kept
as exported no-op stubs for ABI compatibility, but the per-OS desktop
file / registry / Info.plist logic is dropped along with its psapi /
advapi32 / xdg-mime dependencies. RetroArch never used the discord://
URL handler functionality these provided.
* C++ -> C89 translations:
- std::atomic_bool / std::atomic_uint -> plain bool / unsigned (single
threaded model)
- std::mutex / std::lock_guard -> removed entirely
- std::thread / std::condition_variable -> removed entirely
- std::chrono -> cpu_features_get_time_usec()
- std::mt19937_64 + uniform_real_dist -> rand() with bounded jitter
- MsgQueue<T,N> template -> typed ring-buffer arrays
- JsonWriter / JsonDocument / JsonReader RAII classes -> functions over
rjsonwriter_t* / rjson_t*
- BaseConnection inheritance (Unix/Win) -> #ifdef-gated discord_pipe_*
functions
- C++ lambdas for onConnect/onDisconnect -> static functions
* The wire protocol is preserved byte-for-byte. Frame header layout
(8 bytes, two little-endian uint32), opcode values, JSON field ordering
(presence flushed before send-queue, nonce-as-string, party.size as
array, instance as bool), and the upstream behavior of clearing handlers
on disconnect (so the disconnected callback only fires on subsequent
disconnects, not the first - this matches upstream and is invisible
because RetroArch's handler is a no-op stub) are all preserved.
What changed elsewhere
----------------------
* runloop.c, retroarch.c: drop the #ifdef DISCORD_DISABLE_IO_THREAD
guards around Discord_UpdateConnection. With no IO thread to disable,
the call is unconditional.
* griffin/griffin.c, griffin_cpp.cpp, griffin_objc.m: drop the seven
deps/discord-rpc/src/* unity-build #include directives. griffin.c
keeps the network/discord.c include unchanged.
* Makefile.common: collapse the 30-line discord block to four lines.
Drops NEED_CXX_LINKER, INCLUDE_DIRS += -Ideps/discord-rpc/include,
DISCORD_DISABLE_IO_THREAD toggle, all seven deps OBJ entries, and
the Win32-only -lpsapi -ladvapi32 link libraries (only needed by
the discarded discord_register_win.c).
* deps/discord-rpc/: removed entirely (15 files).
Verification
------------
The rewrite was exercised against a regression suite (188 assertions
across three test programs) that covers: every JSON serializer with
parsed-back equivalence checks against the upstream wire format, INT64
edge cases, frame header layout, ring-buffer fill/overflow/wraparound,
backoff bounds, and a full end-to-end Unix-socket dialogue against a
mock Discord server through handshake -> READY -> SET_ACTIVITY ->
SUBSCRIBE x3 -> PING -> PONG -> CLOSE. All 188 assertions pass.
Performance (gcc 13, -O2, x86_64; median of 5 runs)
---------------------------------------------------
Per-frame steady-state cost (RunCallbacks + UpdateConnection together):
upstream (IO thread) 52 ns/frame
upstream (no IO thread) 79 ns/frame
rewrite 47 ns/frame (-40% vs no-thread upstream)
Per-call costs:
Discord_UpdatePresence: ~unchanged (831 ns vs 806 ns; both bound by
JSON serialization which uses the same
rjsonwriter_* underneath)
Discord_RunCallbacks: 5 ns vs 37 ns (-86%; mutex+atomic removal)
Discord_UpdateConnection: 43 ns vs 41 ns (within noise)
Build metrics:
.text size: 16.3 KB vs 21.4 KB (-24%)
.data size: 12 B vs 52 B (-77%)
Compile time: 0.63 s vs 2.85 s (-78%; one .c file vs five .cpp)
Files changed
-------------
modified: Makefile.common (-24 lines)
modified: network/discord.h (+87, -2 lines; absorbs the
public Discord-RPC types and
function declarations that
formerly came from
deps/discord-rpc/include/discord_rpc.h)
modified: network/discord.c (+1604 lines net; folds in
the entire RPC layer)
modified: runloop.c (-2 lines)
modified: retroarch.c (-2 lines)
modified: griffin/griffin.c (-6 lines)
modified: griffin/griffin_cpp.cpp (-17 lines)
modified: griffin/griffin_objc.m (-4 lines)
removed: deps/discord-rpc/ (15 files, ~2000 lines)
Summary of what this delivers
-----------------------------
                     Before                            After
Files                17 (14 C++/C/m + headers)         1 (network/discord.c)
Languages            C++14, C, Objective-C             C89
Lines                ~2,000 in deps + 454 in network   2,042 in network
Compile time         2.85 s                            0.63 s (-78%)
Code size (.text)    21.4 KB                           16.3 KB (-24%)
Per-frame overhead   79 ns                             47 ns (-40%)
Build deps           psapi, advapi32, xdg-mime,        none
                     NEED_CXX_LINKER
Test coverage        none                              188 assertions, all pass
rpng_pass_geom() computed `pitch` and `pass_size` in `unsigned int` even
though pass_size is stored through `size_t *`. On inputs where the
intermediate products exceed 32 bits this wraps silently and
under-reports the size, which downstream causes an undersized inflate
buffer allocation and a heap buffer overflow during decode.

Concrete case: a 30000x30000 16-bit-per-channel RGBA PNG passes the
IHDR output-size cap (which gates on the RGBA-8 output buffer of
~3.35 GiB, well under 4 GiB) but its 16bpc intermediate scanline buffer
needs 7,200,030,000 bytes (6.71 GiB). With the old arithmetic this
wrapped to 2.71 GiB (7,200,030,000 mod 2^32), the inflate buffer was
allocated at that wrapped size, and the inflate step would then write
the wrapped-off 2^32 bytes (~4 GiB) past the buffer end. A crafted
input of this shape has exploit potential on any platform where the
allocation happens to succeed.

Fix: lead each arithmetic chain with a `(size_t)` cast on the first
operand (typically ihdr->width or ihdr->depth) so the whole expression
promotes to size_t. The local `bpp` and `pitch` variables are widened
to size_t to hold the result; the out-parameters remain `unsigned *`
and the narrowing cast at assignment time is explicit.

On 64-bit builds this closes the wrap entirely -- any computation that
would previously have wrapped now produces the correct value, which
downstream either allocates successfully (plenty of RAM) or fails
cleanly at malloc (returns NULL -> IMAGE_PROCESS_ERROR). On 32-bit
builds the pitch output can still be narrowed when the real value
exceeds UINT32_MAX, but that path is unreachable in practice: reaching
it requires an IHDR that the caller-side size check also has to accept,
and no currently-callable IHDR does. Behaviour for all dimensions that
previously computed correctly is unchanged.
Verified empirically:
100x100 RGBA-8      : pass_size unchanged (40100)
23200x28000 RGBA-8  : pass_size unchanged (2,598,428,000)
32767x32767 RGBA-8  : pass_size unchanged (4,294,737,923)
30000x30000 RGBA-16 : old=2,905,062,704 (WRAPPED)  new=7,200,030,000
40000x20000 RGBA-16 : old=2,105,052,704 (WRAPPED)  new=6,400,020,000

The two wrapping cases now report the correct size to the caller, which
will cleanly refuse the allocation on any real-world machine instead of
silently under-allocating.

Pre-existing latent bug, unrelated to recent changes; surfaced while
reviewing the IHDR-cap raise from 2 GiB to 4 GiB, which widens the
range of inputs reaching this function.
The IHDR cap added in the previous commit gates only the final decoded
output buffer size (width * height * 4 bytes, because rpng always
produces ARGB32 regardless of source depth). It does not gate the
intermediate inflate buffer, which rpng_pass_geom sizes as
(pitch + 1) * height and which depends on the color_type and depth. For
16bpc RGBA that intermediate is 2x the output (eight bytes per pixel
rather than four).

A 30000x30000 16bpc-RGBA PNG passes the output cap (3.35 GiB) but
requires a 6.71 GiB intermediate buffer. After the prior pass_geom
widening commit the computation is correct and the downstream malloc
fails cleanly; before that commit it wrapped uint32 and
under-allocated, yielding a heap overflow during decode. Either way,
rejecting these inputs at IHDR parse time is cleaner than relying on
malloc failure to contain them.

Add a second cap on pass_size (the intermediate buffer) alongside the
existing output-buffer cap, both at 4 GiB. Use the existing
rpng_pass_geom helper so the sizing logic stays in one place and
matches what rpng_process_init would compute downstream.

Reorder the checks so rpng_process_ihdr (color_type+depth validation)
runs first: rpng_pass_geom's switch assumes a legal color_type and will
otherwise return zero for pitch/pass_size, which would cause the new
cap to silently pass malformed inputs. This reorder is
behaviour-preserving for invalid IHDRs -- they are still rejected, just
at a slightly earlier point.

Verified by round-tripping invalid combinations (1-bit RGB, 16-bit
palette, color_type=5) through the check: all still rejected.
Behaviour matrix:

format   dimensions   output-GiB  inter-GiB  result
----------------------------------------------------------------------
RGBA-8   23200x28000  2.42        2.42       accepted (unchanged)
RGBA-8   32767x32767  4.00        4.00       accepted (unchanged)
RGBA-8   32768x32768  4.00        4.00       rejected (output cap,
                                             unchanged)
RGBA-16  20000x20000  1.49        2.98       accepted (under both caps)
RGBA-16  25000x25000  2.33        4.66       rejected (intermediate
                                             cap; was silently wrapped
                                             before)
RGBA-16  30000x30000  3.35        6.71       rejected (intermediate
                                             cap; the exploit case)

All previously-accepted inputs remain accepted. All images newly
rejected are ones that would either under-allocate (pre-widening) or
fail at malloc (post-widening); making the rejection explicit and early
is the defense-in-depth addition. No behavioural change on 64-bit for
any image whose intermediate buffer is under 4 GiB. On 32-bit the new
cap is redundant with the downstream malloc-failure path but shaves off
wasted work and makes error reporting more explicit.
Remove boilerplate, fix resolution refresh rate parsing
Change 4 dispatch functions in video_driver.c to try the display
server first before falling back to the per-driver poke/ctx interface:
- video_driver_get_next_video_out
- video_driver_get_prev_video_out
- video_driver_get_video_output_size
- video_context_driver_get_metrics
On Win32, the display server now handles the above 4 queries via
dispserv_win32.c (wired in the previous commit). On other platforms
the display server slots are NULL so the poke/ctx fallback fires,
preserving existing behavior.
Remove the identical boilerplate wrapper functions that each Win32
graphics driver previously needed:
- d3d10_get_video_output_prev/next (gfx/drivers/d3d10.c)
- d3d11_get_video_output_prev/next (gfx/drivers/d3d11.c)
- d3d12_get_video_output_prev/next (gfx/drivers/d3d12.c)
- gdi_get_video_output_prev/next (gfx/drivers/gdi_gfx.c)
- gfx_ctx_wgl_get_video_output_prev/next (wgl_ctx.c)
- gfx_ctx_w_vk_get_video_output_prev/next (w_vk_ctx.c)
Fix resolution refresh rate parsing in menu_cbs_ok.c: the
resolution list format is "WIDTHxHEIGHT (RATE Hz)" but the parser
only skipped the space, not the opening parenthesis, causing
strtod("(120 Hz)") to return 0. Change the skip to a while loop
that advances past both spaces and '('.
When window_auto_width_max and video_fullscreen_x are both zero (common
on first run or minimal configs), the windowed mode size cap previously
fell back to DEFAULT_WINDOW_AUTO_WIDTH_MAX (1920) and
DEFAULT_WINDOW_AUTO_HEIGHT_MAX (1080). Now query the display server for
the actual monitor resolution before falling back to the compiled-in
defaults.

This requires the early display server init added in 2011892, which
ensures the display server is available before
video_driver_init_internal computes window dimensions. On a 4K monitor
with video_scale set high, the window will now correctly scale up to
3840x2160 instead of capping at 1920x1080.
Move win32_get_refresh_rate, win32_get_video_output_prev/next,
win32_get_video_output_size, win32_get_video_output, and
win32_get_metrics from gfx/common/win32_common.c to
gfx/display_servers/dispserv_win32.c. Also move the
DISPLAYCONFIG_*_CUSTOM struct definitions, the QUERYDISPLAYCONFIG /
GETDISPLAYCONFIGBUFFERSIZES function pointer typedefs, and the
WIN32_GET_VIDEO_OUTPUT macro that these functions depend on.

These are display server operations -- they query the display hardware
for resolution, refresh rate, and DPI metrics -- and belong in the
display server driver, not in the shared Win32 platform infrastructure.

The declarations remain in win32_common.h so drivers that still
reference these functions in their poke/ctx tables (d3d10, d3d11,
d3d12, d3d8, gdi, wgl, w_vk, uwp) continue to compile and link.

Pure code motion -- no functional change.
Change video_driver_get_refresh_rate to try the display server first
before falling back to the per-driver poke interface, matching the
pattern already used for get_video_output_prev/next,
get_video_output_size, and get_metrics.

On Win32, the display server uses QueryDisplayConfig to read the
refresh rate -- the same implementation previously called through the
poke interface. On other platforms the display server slot is NULL so
the poke fallback fires unchanged.

This was deferred from commit 6d708d4 due to incorrect refresh rates
during resolution switching, which was traced to a parsing bug in the
resolution string handler (fixed in that same commit).
Add dispserv_uwp.c -- a display server driver for UWP (Xbox, Windows
Store) that implements get_refresh_rate, get_video_output_size, and
get_metrics using the WinRT DisplayInformation and
HdmiDisplayInformation APIs. Add C wrapper functions
uwp_get_refresh_rate() and uwp_get_dpi() to uwp_main.cpp to bridge the
C++/WinRT APIs for the C display server.

Wire dispserv_uwp into video_display_server_init for __WINRT__ builds
and set get_display_type to return RARCH_DISPLAY_WIN32 in the UWP
frontend driver so the early init path selects it.

With all 5 display server operations now handled by either
dispserv_win32 (desktop) or dispserv_uwp (UWP), remove the remaining
direct win32_get_* references from every poke and context table:
- win32_get_refresh_rate from d3d10/11/12/d3d8/d3d9cg/d3d9hlsl/gdi
  poke tables
- win32_get_video_output_size from d3d10/11/12/gdi poke tables and
  wgl/w_vk context tables
- win32_get_metrics from wgl/w_vk/uwp_egl context tables

All display queries now route exclusively through the display server
dispatch in video_driver.c.
Wire cocoa_get_metrics into dispserv_apple.m -- the implementation
already existed in ui/drivers/cocoa/cocoa_common.m (using
CGDisplayScreenSize on macOS, UIScreen with per-device DPI tables on
iOS/tvOS) but was only accessible through the context driver. Routing
it through the display server makes DPI available earlier in the init
sequence and may resolve the long-standing EXC_BAD_ACCESS crash on Mac
noted in gfx_display.c (the crash was in the context driver path, not
the display server path).

Add a RARCH_DISPLAY_OSX case to video_display_server_init so Apple
platforms are explicitly matched rather than falling through to the
default case. Set get_display_type in the Darwin frontend driver to
return RARCH_DISPLAY_OSX, enabling the early display server init path
to select dispserv_apple before the video driver creates a window.
Wire x11_get_metrics into dispserv_x11.c -- the implementation already
existed in gfx/common/x11_common.c (using XDisplayWidthMM,
XDisplayHeightMM, DisplayWidth, DisplayHeight for physical and pixel
dimensions) but was only accessible through context drivers.

Remove x11_get_metrics from the X11 context driver tables:
- x_ctx.c (GLX)
- x_vk_ctx.c (Vulkan)
- xegl_ctx.c (EGL)

Display metrics now route through the display server on X11, matching
the pattern established for Win32, UWP, and Apple.
Wire android_display_get_metrics into dispserv_android.c -- the
implementation already existed in the same file (using Android system
properties for DPI) but was not connected to the vtable.

Remove android_display_get_metrics from the Android context driver
tables:
- android_ctx.c (EGL)
- android_vk_ctx.c (Vulkan)

Display metrics now route through the display server on Android,
matching the pattern established for Win32, UWP, Apple, and X11.
Wayland, Emscripten, QNX, and Switch still use context driver
metrics -- they lack dedicated display servers.
The underlying gfx context driver crash has been fixed with the recent display server change.
video_frame_delay_auto() reads the last N frame-time samples from the
frame_time_samples ring buffer (sized MEASURE_FRAME_TIME_SAMPLES_COUNT,
currently 2048). The previous loop guarded against an underflowed index
with:
if (i > frame_time_index)
continue;
frame_time_i = video_st->frame_time_samples[frame_time_index - i];
This was intended as protection for the early-startup case, but the
caller already gates invocation on
video_st->frame_count > frame_time_interval, so that case cannot
actually occur.
What the guard did do in practice was silently skip samples whenever
frame_time_index was small due to a ring wrap. On every wrap (every
2048 frames, ~34 seconds at 60 Hz), up to `frame_time_interval` samples
at the end of the buffer were never read, leaving an 8-sample blind
window in the averaging logic. frame_time_avg for that cycle could
collapse to 0, skewing the auto frame-delay adjustment and resetting
count_pos_avg spuriously.
Replace the guard with a proper modular read index, matching how the
buffer is written (video_driver.c:~4219). This removes the blind window
and reads contiguous wrapped samples correctly.
No functional change when frame_time_index >= frame_time_interval
(the common case); correct behavior at the wrap boundary.
Add XRandR-based implementations of get_refresh_rate and
get_video_output_size to dispserv_x11.c. get_refresh_rate queries the
first connected output's current CRTC mode and computes the rate from
dotClock / (hTotal * vTotal). get_video_output_size queries the first
connected output's current CRTC for width and height.

Both functions are guarded by #ifdef HAVE_XRANDR and fall back to NULL
(triggering the poke/ctx fallback) when XRandR is not available.

This completes the X11 display server -- it now handles metrics,
refresh rate, and video output size, matching the Win32 display
server's coverage.
Add XRandR-based implementations of get_video_output_prev and
get_video_output_next to dispserv_x11.c. Both functions find the first
connected output's active CRTC, locate the current mode in the
output's mode list, then apply the adjacent mode via XRRSetCrtcConfig.
Guarded by #ifdef HAVE_XRANDR. Without XRandR the vtable entries are
NULL (no-op fallback).

This completes all 5 display server slots for X11: get_refresh_rate,
get_video_output_size, get_video_output_prev, get_video_output_next,
and get_metrics.
When XRandR is not available, provide a fallback get_video_output_size
using standard Xlib DisplayWidth/DisplayHeight. This ensures the max
window size query works on X11 even without the XRandR extension.
Refresh rate and output prev/next remain NULL without XRandR -- there
is no standard Xlib API for these.
The debug log in video_frame_delay_auto() reads the last 8 samples from
frame_time_samples using raw `frame_time_index - N` expressions,
guarded by `if (frame_time_index > frame_time_frames)` to avoid
negative indices. Two small issues with this:

1. On every ring wrap (every MEASURE_FRAME_TIME_SAMPLES_COUNT frames),
   the guard suppresses the log entirely, producing gaps in debug
   output that make it harder to correlate with averager behaviour at
   wrap boundaries.
2. The samples logged for the non-wrap case happen to be the same
   samples the averager used, but that was coincidental -- the averager
   used its own guard and now (post-5e4f08f) properly wraps. The debug
   output should match.

Drop the guard and mask the indices the same way the averager does, so
the debug output always logs the same 8 samples the averager actually
consumed, including at wrap boundaries.

Debug-only code path (FRAME_DELAY_AUTO_DEBUG defaults to 0); no
behaviour change in release builds.
Add dispserv_wl.c -- a display server for Wayland that implements
get_refresh_rate, get_video_output_size, and get_metrics. The display
server opens its own wl_display connection independent of the context
driver, binds to the first wl_output global, and collects mode and
geometry events via two roundtrips during init. This makes display
metrics available before the video context driver creates a window,
supporting the early display server init path.

- get_refresh_rate: from wl_output.mode event (mHz / 1000)
- get_video_output_size: from wl_output.mode event (width, height)
- get_metrics: MM dimensions from wl_output.geometry, pixel dimensions
  from wl_output.mode, DPI computed from both

get_video_output_prev/next are NULL -- Wayland does not allow clients
to change display modes.

Wire dispserv_wl into video_display_server_init for
RARCH_DISPLAY_WAYLAND. The Unix frontend's get_display_type already
detects Wayland via the WAYLAND_DISPLAY environment variable (added in
commit 2011892).

Remove gfx_ctx_wl_get_refresh_rate and gfx_ctx_wl_get_metrics_common
from the Wayland context driver tables (wayland_ctx.c,
wayland_vk_ctx.c) -- these are now handled by the display server.

Tested with a mock Wayland compositor exercising the full protocol
path: wl_display_connect, wl_registry bind, wl_output.geometry and
wl_output.mode event reception. Verified refresh rate, output size,
physical dimensions, and DPI computation.
Add get_refresh_rate and get_video_output_size to dispserv_kms.c using
the existing DRM globals:
- get_refresh_rate: calls drm_calc_refresh_rate(g_drm_mode), the same
  calculation already used by set_resolution and get_resolution_list
- get_video_output_size: reads g_drm_mode->hdisplay/vdisplay

These globals are set by the DRM context driver (drm_ctx.c) during
init, so the functions return valid data once the video driver is
initialized.

This completes the Linux display server coverage -- X11, Wayland, and
KMS all now implement refresh rate and output size queries.
Move x11_get_metrics from gfx/common/x11_common.c to
gfx/display_servers/dispserv_x11.c and make it static. Remove the
declaration from x11_common.h.

The function is only called from the X11 display server vtable -- the
context driver entries were NULLed in commit 8d366ea. This matches the
Win32 pattern where display query implementations live in the display
server driver.

Pure code motion -- no functional change.
Remove gfx_ctx_wl_get_metrics_common and gfx_ctx_wl_get_refresh_rate
from gfx/common/wayland_common.c and their declarations from
gfx/common/wayland_common.h.

These functions were used by the Wayland context driver tables
(wayland_ctx.c, wayland_vk_ctx.c), which were NULLed in commit 9cdd011
when dispserv_wl was introduced. The display server has its own
independent wl_display connection and does not use these functions.
The previous/next video output functions on Win32 were no-ops -- they
enumerated display modes to find the adjacent resolution but never
applied it. This has been broken since the functions were first
written.

Fix both functions to call win32_change_display_settings after finding
the target mode:
- get_video_output_prev: tracks the last mode with a different
  resolution, applies it when the current resolution is matched
- get_video_output_next: sets a found flag on the current resolution
  match, applies the first subsequent mode with a different resolution

This makes the resolution left/right controls in the menu
(Settings > Video > Output) functional on Win32.
Remove the get_refresh_rate, get_video_output_size,
get_video_output_prev, and get_video_output_next poke functions from
gl1, gl2, and gl3. These were pure pass-throughs to the context
driver -- the display server dispatch already falls through to the ctx
driver when poke is NULL, so they added an unnecessary hop.

Remove drm_get_refresh_rate from the DRM poke tables (drm_gfx,
exynos_gfx) and ctx tables (drm_ctx, drm_go2_ctx). The KMS display
server (dispserv_kms) performs the identical
drm_calc_refresh_rate(g_drm_mode) call and fires first.

No platform loses coverage -- the dispatch chain changes from
display_server -> poke -> ctx to display_server -> ctx, reaching the
same endpoint. 12 functions removed, 16 vtable entries NULLed.
Created by pull[bot] (v2.0.0-alpha.4)