
fix: Python SDK all tests passing (346/346 unit, 138/138 cross-SDK)#11

Open
joalves wants to merge 11 commits into main from fix/python-sdk-all-tests-passing

Conversation


@joalves joalves commented Feb 21, 2026

Summary

  • Fix all 5 remaining unit test failures to achieve 346/346 pass rate with exit code 0
  • Add custom_assignments field to ContextConfig alongside the legacy misspelled cassigmnents field
  • Fix double error logging in refresh_async (emitted both string and exception events)
  • Fix DefaultAudienceDeserializer to return None on invalid JSON instead of raising ValueError
  • Fix error message casing, publish race conditions, lock ordering, and null-safety issues

Test Results

  • Unit Tests: 346/346 passed, exit code 0
  • Cross-SDK Tests: 138/138 scenarios passed, exit code 0

Key Changes

  • sdk/context_config.py: Added custom_assignments field
  • sdk/context.py: Read both custom_assignments and cassigmnents; removed redundant error logging; fixed lock ordering and race conditions
  • sdk/default_audience_deserializer.py: Catch all exceptions and return None
  • Various other fixes for casing, null-safety, and thread-safety

Test plan

  • Run python3 -m pytest test/ -v --tb=short — 346/346 passed
  • Run cross-SDK test suite — 138/138 scenarios passed
  • Verify exit code 0

Summary by CodeRabbit

  • New Features

    • Named-parameter SDK creation via ABsmartly.create(...) with timeout/retries and simplified initialization.
    • Audience matching integrated into context handling; new finalize lifecycle event.
  • Deprecations

    • cassigmnents deprecated in favour of custom_assignments.
    • Class name standardised to ABsmartly (legacy alias retained).
  • Documentation

    • README and examples updated with unified init flow and platform-specific usage.
  • Tests

    • Extensive new and updated unit tests covering concurrency, publishing, context flows and utilities.
  • Chores

    • Project ignore list expanded (various tooling and metadata files).

- Add custom_field_value method for accessing custom field values
- Fix context_event_logger event types
- Fix default_variable_parser for better type handling
- Fix variant_assigner for consistent behavior
- Fix type coercion for custom field values (json, number, boolean types)
The attrs_seq assignment was happening during a read operation but mutating
shared state. This could cause race conditions with concurrent access.
The attrs_seq is already properly set in the write path when assignments
are created.
- Add SDK initialization tests
- Add provider and publisher tests
- Add publish timeout tests
- Add attribute management tests
- Add unit validation tests
- Add concurrent operations tests
- Add error recovery tests

Total: 36 new tests added, all 128 tests pass
Add 203 tests aligned with canonical test specs: MD5 (14),
Murmur3 (36), variant assignment (66), audience matcher (3),
and context canonical tests (87). Fix test_variable_value bug
using wrong fixture.
- Add custom_assignments field to ContextConfig alongside legacy cassigmnents
- Fix Context constructor to read both custom_assignments and cassigmnents fields
- Fix double error logging in refresh_async (was emitting string + exception)
- Fix DefaultAudienceDeserializer to return None on invalid JSON instead of raising
- Fix error message casing (ABSmartly -> ABsmartly) for consistency
- Fix publish race condition with read/write lock ordering
- Fix closing_future.exception -> set_exception call
- Fix data_lock release_write -> release_read in read-only methods
- Add null-safety for variable parsing and custom field parsing
- Add refresh timer guard for closed/closing state

coderabbitai bot commented Feb 21, 2026

Walkthrough

This PR renames ABSmartly/ABSmartlyConfig to ABsmartly/ABsmartlyConfig with backward aliases, adds ABsmartly.create(...) factory, and centralises package exports. It refactors Client HTTP response handling, adds audience-matching and assignment/audience checks in Context, introduces attribute sequencing and optional assignment variables, and tightens concurrency primitives. Several parsers/deserializers now log and raise on errors and regex matching gains timeouts. Documentation and examples are updated to the new API, and a large suite of unit tests and example scripts are added or extended. Minor .gitignore and other small internal cleanups are included.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes

Poem

A rabbit nibbles on tidy code,
Renames hop in a careful mode,
Factories sprout where constructors stood,
Tests and docs parade through the wood,
Safe locks, timeouts — the garden's good. 🐇

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 1.33%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly summarizes the main change: achieving full test passage (346 unit tests and 138 cross-SDK scenarios), the primary objective of the PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Fix BinaryOperator for proper numeric type coercion and InOperator
for string containment and collection membership checks.

The misspelled field cassigmnents is now a deprecated property alias
that forwards to the correctly named custom_assignments field.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (11)
sdk/internal/murmur32.py (1)

66-67: Misleading function name: performs left rotation, not right.

The function is named rotate_right but the implementation (n << d) | (n >> (32 - d)) performs a left rotation (ROTL). This matches the MurmurHash3 algorithm correctly, but the naming could confuse future maintainers.

♻️ Suggested rename for clarity
-def rotate_right(n, d):
+def rotate_left(n, d):
     return (n << d) | (n >> (32 - d)) & 0xFFFFFFFF

Note: This would require updating the call sites at lines 26, 30, and 46.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/internal/murmur32.py` around lines 66 - 67, The function rotate_right is
misnamed because it implements a left rotation; rename the function to
rotate_left (or rotate_rotl) and update all call sites that reference
rotate_right to the new name (check uses in this module). While renaming, also
tighten the masking by returning ((n << d) | (n >> (32 - d))) & 0xFFFFFFFF so
the mask applies to the combined result and preserves 32-bit behavior; keep the
existing semantics otherwise.
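For reference, the suggested rename with the tightened mask can be sketched as a stand-alone function (the SDK's module layout and call sites are not reproduced here):

```python
# Hypothetical sketch of the suggested fix: renamed to rotate_left, with the
# mask applied to the combined result so the value stays within 32 bits.
def rotate_left(n, d):
    """Rotate the 32-bit integer n left by d bits."""
    return ((n << d) | (n >> (32 - d))) & 0xFFFFFFFF
```

With the mask applied to the combined result, the high bits shifted out on the left reappear on the right, which is the behavior MurmurHash3 expects.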
sdk/internal/lock/atomic_bool.py (1)

19-21: Align AtomicInt encapsulation with AtomicBool for consistency.

AtomicBool uses a private _value attribute with a @property decorator for thread-safe access, ensuring all modifications go through the lock. AtomicInt, however, uses a public value attribute without property protection. This means direct assignments like int_obj.value = 5 bypass the lock, creating an inconsistency. Apply the same property-based encapsulation pattern to AtomicInt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/internal/lock/atomic_bool.py` around lines 19 - 21, Make AtomicInt match
AtomicBool's encapsulation by renaming its public value attribute to a private
_value and exposing a thread-safe `@property` getter and setter that acquire
self._lock; update all AtomicInt methods (e.g.,
increment/decrement/add/get/whatever methods referencing value) to use
self._value internally and remove direct public assignments so external code
must go through the property, ensuring all reads/writes are performed under the
lock and preserving existing method semantics and return values.
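A minimal sketch of the property-based pattern described above; the method names and lock attribute are assumptions for illustration, not necessarily the SDK's actual API:

```python
import threading

# Hypothetical AtomicInt matching AtomicBool's encapsulation: all reads and
# writes of the private _value go through the lock via the property.
class AtomicInt:
    def __init__(self, initial=0):
        self._lock = threading.Lock()
        self._value = initial

    @property
    def value(self):
        with self._lock:
            return self._value

    @value.setter
    def value(self, new_value):
        # External assignments like counter.value = 5 now take the lock too.
        with self._lock:
            self._value = new_value

    def increment(self, delta=1):
        with self._lock:
            self._value += delta
            return self._value
```

The setter closes the gap the review describes: with a plain public attribute, `int_obj.value = 5` bypasses the lock entirely.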
sdk/context.py (2)

262-264: Unused loop variable value.

The loop variable value is not used within the loop body. Use _ to indicate it's intentionally unused.

♻️ Proposed fix
-                        for key, value in variables.items():
+                        for key, _ in variables.items():
                             index_variables[key] = experiment_variables
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 262 - 264, The loop currently unpacks (key,
value) from variables.items() but never uses value; change the unpack to (key,
_) in that loop so the intent is clear and to avoid an unused-variable warning —
update the loop where index_variables is populated (the for key, value in
variables.items() block that assigns index_variables[key] =
experiment_variables) to use an underscore for the unused second element.

176-181: Remove unused helper method _handle_future_callback.

This helper method is defined but never called anywhere in the class. The existing future callback handling still uses inline implementations.

♻️ Proposed fix: Remove dead code
-    def _handle_future_callback(self, future: Future, on_success, on_error):
-        """Helper method to reduce duplication in future callback handling."""
-        if future.done() and not future.cancelled() and future.exception() is None:
-            on_success(future.result())
-        elif not future.cancelled() and future.exception() is not None:
-            on_error(future.exception())
-

Alternatively, refactor the existing callbacks to use this helper if deduplication is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 176 - 181, The method _handle_future_callback in
the class is dead code (defined but never used); remove its entire definition
from sdk/context.py to eliminate unused helper, or alternatively refactor
existing inline future callbacks to call _handle_future_callback in places where
Future callbacks are created (so update those callback registrations to pass the
Future and on_success/on_error handlers). Ensure no remaining references to
_handle_future_callback remain after the change.
sdk/jsonexpr/operators/match_operator.py (1)

12-26: Signal-based timeout only works on Unix systems.

The signal.SIGALRM approach for regex timeouts is Unix-only. The fallback on lines 25-26 silently yields without any timeout protection on Windows. This is acceptable for cross-platform compatibility, but consider logging a debug message when the fallback is triggered so that users are aware timeouts are not enforced.

💡 Optional: Log when timeout protection is unavailable
     except (AttributeError, ValueError):
+        logger.debug("Signal-based timeout unavailable on this platform")
         yield
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 12 - 26, The
signal-based timeout in the timeout contextmanager (function timeout and inner
timeout_handler) silently falls back on non-Unix platforms; modify the except
(AttributeError, ValueError) block to emit a debug-level log message indicating
that SIGALRM is unavailable and regex timeouts will not be enforced on this
platform (use the module logger via logging.getLogger(__name__) or an existing
logger), then yield as before so behavior is unchanged except for the debug
notification.
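The Unix-only guard with the suggested debug log might look like this sketch (a stand-alone approximation, not the SDK's actual implementation):

```python
import logging
import signal
from contextlib import contextmanager

logger = logging.getLogger(__name__)

# Sketch of a SIGALRM-based timeout: enforced on Unix main threads, with a
# logged no-op fallback elsewhere (e.g. Windows or non-main threads).
@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        raise TimeoutError("regex evaluation timed out")

    try:
        old_handler = signal.signal(signal.SIGALRM, timeout_handler)
    except (AttributeError, ValueError):
        # SIGALRM unavailable: yield without enforcement, but say so.
        logger.debug("Signal-based timeout unavailable on this platform")
        yield
        return
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)  # cancel the pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler
```

Restoring the previous handler in the `finally` block keeps the context manager from leaking its handler into unrelated code.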
test/test_named_param_init.py (1)

1-8: Remove unused import.

The patch import from unittest.mock is not used in this test file.

♻️ Proposed fix
 import unittest
-from unittest.mock import Mock, patch
+from unittest.mock import Mock

 from sdk.absmartly import ABsmartly
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_named_param_init.py` around lines 1 - 8, The import 'patch' from
unittest.mock in test_named_param_init.py is unused; remove it by deleting
'patch' from the import statement (leaving "from unittest.mock import Mock") or
simplify the import to only what is used so there are no unused imports
(reference symbol: patch).
test/test_publisher.py (2)

5-6: Remove unused import.

ConnectionError is imported but not used in any test.

♻️ Proposed fix
 from requests import Response
-from requests.exceptions import Timeout, HTTPError, ConnectionError
+from requests.exceptions import Timeout, HTTPError
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_publisher.py` around lines 5 - 6, Remove the unused import
ConnectionError from the import line in test/test_publisher.py; update the
import statement that currently reads "from requests.exceptions import Timeout,
HTTPError, ConnectionError" to only import the exceptions actually used (e.g.,
"Timeout, HTTPError") so there are no unused symbols.

78-84: Remove or utilise the unused result variables.

The result variables on lines 79 and 155 are assigned but never used. Either remove the assignment or add assertions on the result.

♻️ Proposed fix
         future = self.publisher.publish(mock_context, event)
-        result = future.result(timeout=5)
+        future.result(timeout=5)  # Ensures completion without timeout

         self.http_client.put.assert_called_once()

And similarly for line 155.

Also applies to: 154-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_publisher.py` around lines 78 - 84, The test assigns result =
future.result(timeout=5) but never uses it; either remove the unused assignment
or assert on the returned value. Update the tests around
self.publisher.publish(...) (the variables future and result) to drop the result
assignment if it's not needed, or add assertions that validate the published
response (e.g., the content, status, or type returned by future.result()) so the
result variable is used; apply the same change for the second occurrence around
line 155 that also assigns result without using it.
test/test_sdk.py (1)

1-4: Remove unused import.

The patch import from unittest.mock is not used in this test file.

♻️ Proposed fix
 import unittest
 from concurrent.futures import Future
-from unittest.mock import MagicMock, Mock, patch
+from unittest.mock import MagicMock, Mock
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 1 - 4, Remove the unused import "patch" from
the unittest.mock import list in test/test_sdk.py; edit the import line that
currently reads "from unittest.mock import MagicMock, Mock, patch" to only
import the used symbols (MagicMock, Mock) so there are no unused imports in the
test module.
sdk/__init__.py (1)

11-23: Consider sorting __all__ alphabetically for consistency.

The static analysis tool flags that __all__ is not sorted. While the current grouping (primary class followed by its alias) is logical, alphabetical sorting is a common convention that aids discoverability.

♻️ Optional: Sort __all__ alphabetically
 __all__ = [
     "ABsmartly",
     "ABSmartly",
     "ABsmartlyConfig",
     "ABSmartlyConfig",
     "Client",
     "ClientConfig",
     "Context",
     "ContextConfig",
     "ContextEventLogger",
     "DefaultHTTPClient",
     "DefaultHTTPClientConfig",
 ]

Note: The current list is already nearly sorted; only minor reordering would be needed if strict alphabetical order is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/__init__.py` around lines 11 - 23, The __all__ export list is not
strictly alphabetized; reorder the entries in the __all__ list so they are in
alphabetical order (e.g., "ABSmartly" before "ABsmartly", etc.) to satisfy the
linter and improve consistency—update the list that currently contains
"ABsmartly", "ABSmartly", "ABsmartlyConfig", "ABSmartlyConfig", "Client",
"ClientConfig", "Context", "ContextConfig", "ContextEventLogger",
"DefaultHTTPClient", "DefaultHTTPClientConfig" to be sorted alphabetically while
keeping the same symbols and aliases.
test/test_context_canonical.py (1)

978-1001: Avoid fixed sleep() in delay tests to reduce flakiness.

The hard-coded time.sleep(0.3) makes these tests timing-sensitive under CI load. Prefer polling with a bounded timeout and early exit once the queue drains.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 978 - 1001, Replace the fixed
time.sleep(0.3) in test_publish_delay_triggers_after_exposure and
test_publish_delay_triggers_after_goal with a bounded polling loop that checks
context.get_pending_count() until it reaches 0 or a max wait (e.g. 1s) elapses,
sleeping a short interval between polls (e.g. 10-50ms) to avoid busy-waiting;
keep assertions that timeout is set after scheduling and assert pending is 0
after the loop, and ensure context.close() is still called in the exposure test
(and add it to the goal test if needed).
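The bounded-polling replacement for the fixed sleep could be factored into a helper like the following sketch; `get_pending_count` is taken from the comment above, while the helper name and defaults are assumptions:

```python
import time

# Hypothetical polling helper: wait until the context's publish queue is
# empty or max_wait elapses, checking every `interval` seconds.
def wait_until_drained(context, max_wait=1.0, interval=0.02):
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if context.get_pending_count() == 0:
            return True
        time.sleep(interval)
    return context.get_pending_count() == 0
```

Tests would then assert `wait_until_drained(context)` returns True instead of sleeping a fixed 0.3 seconds, exiting early when the queue drains and tolerating slow CI runs up to the bound.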
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 20-25: The dependency code fence in README.md is missing a
language specifier; update the fenced block that contains the four dependency
lines (setuptools, requests, urllib3, jsons) to include a language token (e.g.,
change ``` to ```text or ```ini) so the markdown linter and syntax highlighting
treat it as plain text.
- Around line 145-147: The code wrongly awaits a concurrent.futures.Future
returned by ctx.wait_until_ready_async(); update the call site that currently
does await ctx.wait_until_ready_async() to instead call
ctx.wait_until_ready_async() (and if you need to block, call .result() on the
returned Future), i.e., remove the await and keep the async Future-handling
pattern used where ctx = sdk.create_context(context_config) is created and later
wait_until_ready_async() is invoked without awaiting.

In `@sdk/context_config.py`:
- Line 11: Replace the incorrect dict literal type annotation for the attribute
custom_assignments with a proper typing annotation: import Optional, Dict, and
Any from typing and change custom_assignments: {} = None to use
Optional[Dict[str, Any]] (or Optional[dict] if you prefer) so type checkers can
understand the attribute; update any other similar attributes in the same file
to follow the same pattern and ensure the typing imports are added at the top of
sdk/context_config.py.
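The corrected annotation can be sketched as follows (only the relevant attribute is shown; the rest of ContextConfig is omitted):

```python
from typing import Any, Dict, Optional

# Minimal sketch: a proper typing annotation instead of the dict literal
# `custom_assignments: {} = None`, which type checkers cannot interpret.
class ContextConfig:
    custom_assignments: Optional[Dict[str, Any]] = None
```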

In `@sdk/context.py`:
- Around line 183-188: There are two identical definitions of the method
_build_audience_attributes which causes the latter to shadow the former; remove
the duplicate definition (the second occurrence) and keep the original
implementation so only one _build_audience_attributes exists, ensuring any
references to self.attributes and audience_attributes remain unchanged and tests
still pass.

In `@sdk/default_audience_deserializer.py`:
- Around line 18-20: The null-check uses bytes_[offset:offset+length] but
jsons.loadb is still called with the full bytes_ buffer; change the
deserialization to operate on the same slice used for the guard by passing
bytes_[offset:offset+length] (or a memoryview(bytes_)[offset:offset+length])
into jsons.loadb so parsing consistently respects offset and length; update the
return line that currently calls jsons.loadb(bytes_, dict) to use the sliced
segment instead.
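A sketch combining this fix with the PR summary's "return None on invalid JSON" behavior, using stdlib json as a stand-in for jsons.loadb (the real SDK class may differ):

```python
import json

# Hypothetical deserializer: honor offset/length by slicing once, reuse the
# slice for both the guard and the parse, and return None on invalid JSON.
class DefaultAudienceDeserializer:
    def deserialize(self, bytes_, offset, length):
        segment = bytes_[offset:offset + length]
        if not segment:
            return None
        try:
            return json.loads(segment)
        except Exception:
            # Invalid JSON: swallow the error and signal "no audience".
            return None
```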

In `@sdk/default_variable_parser.py`:
- Around line 20-25: The parse method's return annotation is incorrect: update
the typing to import Any and change the return type from Optional[dict] to
Optional[Any] on the parse method and on the parent interface declaration (the
VariableParser.parse signature), and simplify the implementation to return
json.loads(config) directly (no conditional dict check); ensure both the
implementation (parse) and the interface signature are updated to Optional[Any].

In `@sdk/jsonexpr/operators/match_operator.py`:
- Around line 49-54: In the except handlers inside the regex matching code (in
match_operator.py, within the function performing regex execution), remove the
unnecessary f-string prefix from the TimeoutError warning (change
logger.warning(f"...") to logger.warning("...")) and replace logger.error(...)
in the generic except Exception as e block with logger.exception(...) so the
stack trace and exception context are logged; keep returning None as before.

In `@test/test_concurrency.py`:
- Around line 151-193: Add assertions to verify publish actually ran and the
queue is drained: after joining threads assert publish_count[0] == 5 to confirm
counting_publish was called for each tracked goal, keep the existing
self.assertEqual(0, len(errors)), and then assert any publish queue is empty —
e.g. if the context exposes a flush()/join_publisher() use that and assert it
completed, or if it has an internal queue like context._publish_queue assert
context._publish_queue.empty() before calling context.close(); reference
symbols: test_concurrent_publish, counting_publish, publish_count,
context.track, context.publish(), and context.close().

In `@test/test_context_canonical.py`:
- Around line 157-164: The test helper create_ready_context currently maps the
legacy 'cassignments' kwarg to the misspelled legacy attribute 'cassigmnents' on
ContextConfig, so the new canonical path ContextConfig.custom_assignments isn't
exercised; update create_ready_context to accept and set
kwargs['custom_assignments'] onto config.custom_assignments (and still preserve
backward compatibility by mapping 'cassignments' to both
config.custom_assignments and the legacy config.cassigmnents if needed), and fix
the typo reference to 'cassigmnents' where appropriate so tests exercise the
canonical custom_assignments path.

In `@test/test_sdk.py`:
- Around line 55-64: The test test_sdk_create_missing_api_key constructs a
Client with no api_key but never asserts behavior and leaves the unused client
variable; decide on desired behavior and fix both implementation and test: if
api_key must be required, add validation in Client.__init__ (in the Client
class) to raise a ValueError (or a specific exception) when
client_config.api_key is missing, then change test_sdk_create_missing_api_key to
assertRaises that exception when instantiating Client; otherwise, if missing
api_key is acceptable, remove the unused client variable from the test (or
replace it with an explicit assertion documenting acceptance, e.g.,
assertIsNone(client_config.api_key) and a comment) and keep Client.__init__
unchanged.

In `@test/test_variant_assigner.py`:
- Around line 8-10: The test helper uses hashlib.md5 in function hash_unit to
create deterministic fixtures, so add a targeted Ruff suppression to silence
S324: append an inline comment like "# noqa: S324" to the hashlib.md5 call (the
line assigning dig) and include a short explanatory comment that this use is
non-cryptographic / for deterministic testing only; leave the rest of hash_unit
unchanged.

---

Nitpick comments:
In `@sdk/__init__.py`:
- Around line 11-23: The __all__ export list is not strictly alphabetized;
reorder the entries in the __all__ list so they are in alphabetical order (e.g.,
"ABSmartly" before "ABsmartly", etc.) to satisfy the linter and improve
consistency—update the list that currently contains "ABsmartly", "ABSmartly",
"ABsmartlyConfig", "ABSmartlyConfig", "Client", "ClientConfig", "Context",
"ContextConfig", "ContextEventLogger", "DefaultHTTPClient",
"DefaultHTTPClientConfig" to be sorted alphabetically while keeping the same
symbols and aliases.

In `@sdk/context.py`:
- Around line 262-264: The loop currently unpacks (key, value) from
variables.items() but never uses value; change the unpack to (key, _) in that
loop so the intent is clear and to avoid an unused-variable warning — update the
loop where index_variables is populated (the for key, value in variables.items()
block that assigns index_variables[key] = experiment_variables) to use an
underscore for the unused second element.
- Around line 176-181: The method _handle_future_callback in the class is dead
code (defined but never used); remove its entire definition from sdk/context.py
to eliminate unused helper, or alternatively refactor existing inline future
callbacks to call _handle_future_callback in places where Future callbacks are
created (so update those callback registrations to pass the Future and
on_success/on_error handlers). Ensure no remaining references to
_handle_future_callback remain after the change.

In `@sdk/internal/lock/atomic_bool.py`:
- Around line 19-21: Make AtomicInt match AtomicBool's encapsulation by renaming
its public value attribute to a private _value and exposing a thread-safe
`@property` getter and setter that acquire self._lock; update all AtomicInt
methods (e.g., increment/decrement/add/get/whatever methods referencing value)
to use self._value internally and remove direct public assignments so external
code must go through the property, ensuring all reads/writes are performed under
the lock and preserving existing method semantics and return values.

In `@sdk/internal/murmur32.py`:
- Around line 66-67: The function rotate_right is misnamed because it implements
a left rotation; rename the function to rotate_left (or rotate_rotl) and update
all call sites that reference rotate_right to the new name (check uses in this
module). While renaming, also tighten the masking by returning ((n << d) | (n >>
(32 - d))) & 0xFFFFFFFF so the mask applies to the combined result and preserves
32-bit behavior; keep the existing semantics otherwise.

In `@sdk/jsonexpr/operators/match_operator.py`:
- Around line 12-26: The signal-based timeout in the timeout contextmanager
(function timeout and inner timeout_handler) silently falls back on non-Unix
platforms; modify the except (AttributeError, ValueError) block to emit a
debug-level log message indicating that SIGALRM is unavailable and regex
timeouts will not be enforced on this platform (use the module logger via
logging.getLogger(__name__) or an existing logger), then yield as before so
behavior is unchanged except for the debug notification.

In `@test/test_context_canonical.py`:
- Around line 978-1001: Replace the fixed time.sleep(0.3) in
test_publish_delay_triggers_after_exposure and
test_publish_delay_triggers_after_goal with a bounded polling loop that checks
context.get_pending_count() until it reaches 0 or a max wait (e.g. 1s) elapses,
sleeping a short interval between polls (e.g. 10-50ms) to avoid busy-waiting;
keep assertions that timeout is set after scheduling and assert pending is 0
after the loop, and ensure context.close() is still called in the exposure test
(and add it to the goal test if needed).

In `@test/test_named_param_init.py`:
- Around line 1-8: The import 'patch' from unittest.mock in
test_named_param_init.py is unused; remove it by deleting 'patch' from the
import statement (leaving "from unittest.mock import Mock") or simplify the
import to only what is used so there are no unused imports (reference symbol:
patch).

In `@test/test_publisher.py`:
- Around line 5-6: Remove the unused import ConnectionError from the import line
in test/test_publisher.py; update the import statement that currently reads
"from requests.exceptions import Timeout, HTTPError, ConnectionError" to only
import the exceptions actually used (e.g., "Timeout, HTTPError") so there are no
unused symbols.
- Around line 78-84: The test assigns result = future.result(timeout=5) but
never uses it; either remove the unused assignment or assert on the returned
value. Update the tests around self.publisher.publish(...) (the variables future
and result) to drop the result assignment if it's not needed, or add assertions
that validate the published response (e.g., the content, status, or type
returned by future.result()) so the result variable is used; apply the same
change for the second occurrence around line 155 that also assigns result
without using it.

In `@test/test_sdk.py`:
- Around line 1-4: Remove the unused import "patch" from the unittest.mock
import list in test/test_sdk.py; edit the import line that currently reads "from
unittest.mock import MagicMock, Mock, patch" to only import the used symbols
(MagicMock, Mock) so there are no unused imports in the test module.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7910b9b and 08d140c.

📒 Files selected for processing (35)
  • .gitignore
  • README.md
  • example/example.py
  • example/simple_example.py
  • sdk/__init__.py
  • sdk/absmartly.py
  • sdk/absmartly_config.py
  • sdk/client.py
  • sdk/context.py
  • sdk/context_config.py
  • sdk/context_event_logger.py
  • sdk/default_audience_deserializer.py
  • sdk/default_context_data_deserializer.py
  • sdk/default_variable_parser.py
  • sdk/internal/lock/atomic_bool.py
  • sdk/internal/lock/concurrency.py
  • sdk/internal/murmur32.py
  • sdk/internal/variant_assigner.py
  • sdk/jsonexpr/expr_evaluator.py
  • sdk/jsonexpr/operators/binary_operator.py
  • sdk/jsonexpr/operators/in_operator.py
  • sdk/jsonexpr/operators/match_operator.py
  • sdk/jsonexpr/operators/not_operator.py
  • test/test_audience_matcher.py
  • test/test_client.py
  • test/test_concurrency.py
  • test/test_context.py
  • test/test_context_canonical.py
  • test/test_md5.py
  • test/test_murmur32.py
  • test/test_named_param_init.py
  • test/test_provider.py
  • test/test_publisher.py
  • test/test_sdk.py
  • test/test_variant_assigner.py
💤 Files with no reviewable changes (1)
  • sdk/jsonexpr/operators/not_operator.py

Comment on lines 20 to 25
```
setuptools~=60.2.0
requests~=2.28.1
urllib3~=1.26.12
jsons~=1.6.3
```

⚠️ Potential issue | 🟡 Minor

Specify a language for the dependencies code fence.

This block is currently untyped, which triggers markdown linting and weakens formatting consistency.

Proposed README fix
-```
+```text
 setuptools~=60.2.0
 requests~=2.28.1
 urllib3~=1.26.12
 jsons~=1.6.3
🧰 Tools

🪛 markdownlint-cli2 (0.21.0)

[warning] 20-20: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 20 - 25, The dependency code fence in README.md is
missing a language specifier; update the fenced block that contains the four
dependency lines (setuptools, requests, urllib3, jsons) to include a language
token (e.g., change ``` to ```text or ```ini) so the markdown linter and syntax
highlighting treat it as plain text.

Comment on lines +145 to +147
```
ctx = sdk.create_context(context_config)
ctx.wait_until_ready_async()
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, check if sdk/context.py exists and find the method definition
find . -name "context.py" -type f | head -20

Repository: absmartly/python3-sdk

Length of output: 81


🏁 Script executed:

# Search for wait_until_ready_async definition and usage
rg -n "def wait_until_ready_async|wait_until_ready_async" --type py -A 3 | head -50

Repository: absmartly/python3-sdk

Length of output: 1298


🏁 Script executed:

# Check the README.md at the specified lines (145-147 and 482-484)
cat -n README.md | sed -n '140,150p'
cat -n README.md | sed -n '477,490p'

Repository: absmartly/python3-sdk

Length of output: 848


🏁 Script executed:

cat -n sdk/context.py | sed -n '574,600p'

Repository: absmartly/python3-sdk

Length of output: 1078


Fix incorrect await usage in line 483.

The wait_until_ready_async() method returns a concurrent.futures.Future object, which is not awaitable. Line 483 incorrectly uses await ctx.wait_until_ready_async(), which will raise a TypeError at runtime. Line 146 shows the correct pattern: call the method directly without await and optionally call .result() on the returned Future to block until ready. Remove the await from line 483 to match the correct usage demonstrated in line 146.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 145 - 147, The code wrongly awaits a
concurrent.futures.Future returned by ctx.wait_until_ready_async(); update the
call site that currently does await ctx.wait_until_ready_async() to instead call
ctx.wait_until_ready_async() (and if you need to block, call .result() on the
returned Future), i.e., remove the await and keep the async Future-handling
pattern used where ctx = sdk.create_context(context_config) is created and later
wait_until_ready_async() is invoked without awaiting.
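The Future-vs-await distinction described above can be sketched with a stand-in class (FakeContext is hypothetical; the real Context internals differ):

```python
# Hypothetical sketch: a method that returns a concurrent.futures.Future,
# which must NOT be awaited directly in plain (non-asyncio) code.
import time
from concurrent.futures import Future, ThreadPoolExecutor


class FakeContext:
    def __init__(self):
        self._executor = ThreadPoolExecutor(max_workers=1)

    def wait_until_ready_async(self) -> Future:
        # Submitting to an executor yields a concurrent.futures.Future.
        return self._executor.submit(time.sleep, 0.01)


ctx = FakeContext()
future = ctx.wait_until_ready_async()  # fire-and-forget is fine
future.result()                        # or block explicitly until ready
```

Calling `.result()` is the blocking equivalent of the incorrect `await`; omitting it simply lets readiness complete in the background.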

publish_delay: int = 50 # seconds
event_logger: Optional[ContextEventLogger] = None
cassigmnents: {} = None
custom_assignments: {} = None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd /repo && cat -n sdk/context_config.py | head -40

Repository: absmartly/python3-sdk

Length of output: 120


🏁 Script executed:

rg -nP --type=py ':\s*\{\}\s*=' sdk/context_config.py

Repository: absmartly/python3-sdk

Length of output: 182


🏁 Script executed:

# Check imports in the file
head -20 sdk/context_config.py

Repository: absmartly/python3-sdk

Length of output: 596


Use proper type annotations for dict attributes instead of dict literals.

Lines 11–14 use dict literals ({}) as type annotations, which type checkers cannot interpret correctly. Replace with Optional[dict] or the more explicit Optional[Dict[str, Any]] (import Dict from typing if needed).

Proposed fix
-    custom_assignments: {} = None
-    overrides: {} = None
-    attributes: {} = None
-    units: {} = None
+    custom_assignments: Optional[dict] = None
+    overrides: Optional[dict] = None
+    attributes: Optional[dict] = None
+    units: Optional[dict] = None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context_config.py` at line 11, Replace the incorrect dict literal type
annotation for the attribute custom_assignments with a proper typing annotation:
import Optional, Dict, and Any from typing and change custom_assignments: {} =
None to use Optional[Dict[str, Any]] (or Optional[dict] if you prefer) so type
checkers can understand the attribute; update any other similar attributes in
the same file to follow the same pattern and ensure the typing imports are added
at the top of sdk/context_config.py.
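A minimal sketch of the corrected annotations, using a stand-in dataclass rather than the actual sdk/context_config.py:

```python
# Illustrative only: proper Optional[Dict[...]] annotations in place of the
# dict-literal ({}) pseudo-annotations flagged above.
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class ContextConfigSketch:
    publish_delay: int = 50  # seconds
    custom_assignments: Optional[Dict[str, Any]] = None
    overrides: Optional[Dict[str, Any]] = None
    attributes: Optional[Dict[str, Any]] = None
    units: Optional[Dict[str, Any]] = None


cfg = ContextConfigSketch(custom_assignments={"exp_test": 1})
```

With these annotations, mypy and similar checkers can verify assignments to the fields instead of treating `{}` as an (invalid) type expression.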

Comment on lines +183 to +188
def _build_audience_attributes(self):
"""Helper method to build audience attributes dictionary."""
audience_attributes = {}
for key in self.attributes:
audience_attributes[key.name] = key.value
return audience_attributes

⚠️ Potential issue | 🟡 Minor

Duplicate method definition: _build_audience_attributes is defined twice.

The method _build_audience_attributes is defined at lines 183-188 and again at lines 715-720. The second definition shadows the first. While both implementations are functionally equivalent, this is likely unintentional and should be consolidated.

🐛 Proposed fix: Remove the duplicate

Remove the second definition at lines 715-720, keeping only the first one at lines 183-188.

-    def _build_audience_attributes(self):
-        """Helper method to build audience attributes map from current attributes."""
-        attrs = {}
-        for attr in self.attributes:
-            attrs[attr.name] = attr.value
-        return attrs
-

Also applies to: 715-720

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 183 - 188, There are two identical definitions
of the method _build_audience_attributes which causes the latter to shadow the
former; remove the duplicate definition (the second occurrence) and keep the
original implementation so only one _build_audience_attributes exists, ensuring
any references to self.attributes and audience_attributes remain unchanged and
tests still pass.
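The shadowing behaviour the comment describes is plain Python semantics; a quick illustration (Demo is hypothetical):

```python
# A second def with the same name inside a class body silently replaces the
# first; only the last definition survives, which is why the duplicate
# _build_audience_attributes should be removed.
class Demo:
    def build(self):
        return "first"

    def build(self):  # shadows the definition above
        return "second"


print(Demo().build())  # only "second" is reachable
```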

Comment on lines +18 to 20
if bytes_ == b'null' or (offset == 0 and length == 4 and bytes_[offset:offset+length] == b'null'):
return None
return jsons.loadb(bytes_, dict)

⚠️ Potential issue | 🟠 Major

Use offset/length consistently for parsing.

The new null guard only partially respects slicing, while deserialisation still uses full bytes_. A valid segment at non-zero offset can be misread.

Proposed fix
         try:
-            if bytes_ == b'null' or (offset == 0 and length == 4 and bytes_[offset:offset+length] == b'null'):
+            segment = bytes_[offset:offset + length]
+            if segment.strip() == b"null":
                 return None
-            return jsons.loadb(bytes_, dict)
+            return jsons.loadb(segment, dict)
         except Exception as e:
             logger.error(f"Failed to deserialize audience filter: {e}")
             return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/default_audience_deserializer.py` around lines 18 - 20, The null-check
uses bytes_[offset:offset+length] but jsons.loadb is still called with the full
bytes_ buffer; change the deserialization to operate on the same slice used for
the guard by passing bytes_[offset:offset+length] (or a
memoryview(bytes_)[offset:offset+length]) into jsons.loadb so parsing
consistently respects offset and length; update the return line that currently
calls jsons.loadb(bytes_, dict) to use the sliced segment instead.
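The consistent-slicing idea can be sketched as follows, with json.loads standing in for jsons.loadb (an illustrative helper, not the SDK's deserializer):

```python
# Sketch of the fix: slice once, then use the same segment for both the
# null guard and the actual parse, so offset/length are always honoured.
import json
import logging

logger = logging.getLogger(__name__)


def deserialize(bytes_: bytes, offset: int, length: int):
    try:
        segment = bytes_[offset:offset + length]
        if segment.strip() == b"null":
            return None
        return json.loads(segment)
    except Exception as e:
        logger.error(f"Failed to deserialize audience filter: {e}")
        return None


# A valid JSON object embedded at a non-zero offset parses correctly:
buf = b'junk{"a": 1}junk'
print(deserialize(buf, 4, 8))  # {'a': 1}
```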

Comment on lines +49 to +54
except TimeoutError:
logger.warning(f"Regex execution timeout (potential ReDoS)")
return None
except Exception as e:
logger.error(f"Unexpected error in regex matching: {e}")
return None

⚠️ Potential issue | 🟡 Minor

Remove the extraneous f prefix and use logging.exception for unexpected errors.

Line 50 contains an f-string without any placeholders, and line 53 uses logger.error which loses the stack trace for unexpected exceptions.

🐛 Proposed fix
                 except TimeoutError:
-                    logger.warning(f"Regex execution timeout (potential ReDoS)")
+                    logger.warning("Regex execution timeout (potential ReDoS)")
                     return None
                 except Exception as e:
-                    logger.error(f"Unexpected error in regex matching: {e}")
+                    logger.exception("Unexpected error in regex matching: %s", e)
                     return None
🧰 Tools
🪛 Ruff (0.15.2)

[error] 50-50: f-string without any placeholders

Remove extraneous f prefix

(F541)


[warning] 52-52: Do not catch blind exception: Exception

(BLE001)


[warning] 53-53: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 49 - 54, In the except
handlers inside the regex matching code (in match_operator.py, within the
function performing regex execution), remove the unnecessary f-string prefix
from the TimeoutError warning (change logger.warning(f"...") to
logger.warning("...")) and replace logger.error(...) in the generic except
Exception as e block with logger.exception(...) so the stack trace and exception
context are logged; keep returning None as before.
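The difference between the two logging calls can be seen in a standalone snippet (unrelated to the SDK's logger setup):

```python
# logging.exception records the full traceback automatically when called
# inside an except block; logging.error with just the message does not.
import io
import logging

logger = logging.getLogger("demo")
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.DEBUG)

try:
    int("not-a-number")
except Exception as e:
    logger.error("error: %s", e)           # message only, no traceback
    logger.exception("unexpected: %s", e)  # message + full traceback

output = stream.getvalue()
```

After running this, `output` contains a `Traceback` section only for the `logger.exception` call, which is what makes it preferable for unexpected errors.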

Comment on lines +151 to +193
def test_concurrent_publish(self):
self.set_up()
config = ContextConfig()
config.units = self.units
context = self.create_test_context(config, self.data_future_ready)
self.assertTrue(context.is_ready())

publish_count = [0]
lock = threading.Lock()

def counting_publish(event):
with lock:
publish_count[0] += 1
future = Future()
time.sleep(0.01)
future.set_result(None)
return future

self.client.publish = counting_publish

for i in range(5):
context.track(f"goal_{i}", {"amount": i * 100})

errors = []

def call_publish():
try:
context.publish()
except Exception as e:
errors.append(e)

threads = []
for _ in range(5):
t = threading.Thread(target=call_publish)
threads.append(t)
t.start()

for t in threads:
t.join()

self.assertEqual(0, len(errors))

context.close()

⚠️ Potential issue | 🟡 Minor

Strengthen test_concurrent_publish with behavioural assertions.

The test increments publish_count but never verifies it. As written, it can pass even if no publish call happens. Please assert publish activity (and ideally queue drain) after joins.

Proposed assertion additions
         for t in threads:
             t.join()

         self.assertEqual(0, len(errors))
+        self.assertGreater(publish_count[0], 0)
+        self.assertEqual(0, context.get_pending_count())

         context.close()
🧰 Tools
🪛 Ruff (0.15.2)

[warning] 161-161: Unused function argument: event

(ARG001)


[warning] 179-179: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_concurrency.py` around lines 151 - 193, Add assertions to verify
publish actually ran and the queue is drained: after joining threads assert
publish_count[0] == 5 to confirm counting_publish was called for each tracked
goal, keep the existing self.assertEqual(0, len(errors)), and then assert any
publish queue is empty — e.g. if the context exposes a flush()/join_publisher()
use that and assert it completed, or if it has an internal queue like
context._publish_queue assert context._publish_queue.empty() before calling
context.close(); reference symbols: test_concurrent_publish, counting_publish,
publish_count, context.track, context.publish(), and context.close().

Comment on lines +157 to +164
def create_ready_context(self, **kwargs):
config = ContextConfig()
config.units = kwargs.get('units', self.units)
if 'overrides' in kwargs:
config.overrides = kwargs['overrides']
if 'cassignments' in kwargs:
config.cassigmnents = kwargs['cassignments']
data_future = kwargs.get('data_future', self.data_future_ready)

⚠️ Potential issue | 🟡 Minor

Add canonical wiring for custom_assignments in test setup.

create_ready_context() only routes cassignments to the legacy cassigmnents alias. This means the new ContextConfig.custom_assignments path is not exercised here, so a regression in the canonical field could be missed.

Proposed test-helper update
     def create_ready_context(self, **kwargs):
         config = ContextConfig()
         config.units = kwargs.get('units', self.units)
         if 'overrides' in kwargs:
             config.overrides = kwargs['overrides']
+        if 'custom_assignments' in kwargs:
+            config.custom_assignments = kwargs['custom_assignments']
         if 'cassignments' in kwargs:
             config.cassigmnents = kwargs['cassignments']
         data_future = kwargs.get('data_future', self.data_future_ready)
         return self.create_context(config, data_future)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 157 - 164, The test helper
create_ready_context currently maps the legacy 'cassignments' kwarg to the
misspelled legacy attribute 'cassigmnents' on ContextConfig, so the new
canonical path ContextConfig.custom_assignments isn't exercised; update
create_ready_context to accept and set kwargs['custom_assignments'] onto
config.custom_assignments (and still preserve backward compatibility by mapping
'cassignments' to both config.custom_assignments and the legacy
config.cassigmnents if needed), and fix the typo reference to 'cassigmnents'
where appropriate so tests exercise the canonical custom_assignments path.

Comment on lines +55 to +64
def test_sdk_create_missing_api_key(self):
client_config = ClientConfig()
client_config.endpoint = "https://sandbox.test.io/v1"
client_config.application = "website"
client_config.environment = "dev"

http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
client = Client(client_config, http_client)

self.assertIsNone(client_config.api_key)

⚠️ Potential issue | 🟡 Minor

Test does not verify expected behaviour for missing api_key.

This test creates a Client with a missing api_key but doesn't assert any specific behaviour or failure. The client variable is assigned but never used. Consider either:

  1. Removing this test if missing api_key is acceptable (no validation in Client)
  2. Adding assertions about the expected behaviour (e.g., requests fail with auth errors)
  3. Adding validation to Client.__init__ if missing api_key should raise an error
💡 Suggested improvement

If missing api_key should be allowed, consider documenting this explicitly:

     def test_sdk_create_missing_api_key(self):
+        """Verify that Client can be created without api_key (validation happens on request)."""
         client_config = ClientConfig()
         client_config.endpoint = "https://sandbox.test.io/v1"
         client_config.application = "website"
         client_config.environment = "dev"

         http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
-        client = Client(client_config, http_client)
+        client = Client(client_config, http_client)  # No validation at construction time

         self.assertIsNone(client_config.api_key)
+        # Note: Requests will fail at runtime due to missing X-API-Key header

Or remove the unused variable if no further assertions are needed:

-        client = Client(client_config, http_client)
+        _ = Client(client_config, http_client)  # Should not raise
🧰 Tools
🪛 Ruff (0.15.2)

[error] 62-62: Local variable client is assigned to but never used

Remove assignment to unused variable client

(F841)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 55 - 64, The test
test_sdk_create_missing_api_key constructs a Client with no api_key but never
asserts behavior and leaves the unused client variable; decide on desired
behavior and fix both implementation and test: if api_key must be required, add
validation in Client.__init__ (in the Client class) to raise a ValueError (or a
specific exception) when client_config.api_key is missing, then change
test_sdk_create_missing_api_key to assertRaises that exception when
instantiating Client; otherwise, if missing api_key is acceptable, remove the
unused client variable from the test (or replace it with an explicit assertion
documenting acceptance, e.g., assertIsNone(client_config.api_key) and a comment)
and keep Client.__init__ unchanged.

Comment on lines +8 to +10
def hash_unit(unit):
dig = hashlib.md5(str(unit).encode('utf-8')).digest()
return base64.urlsafe_b64encode(dig).rstrip(b'=')

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Explore repository structure and find configuration files
git ls-files | grep -E '\.(toml|ini|cfg|yaml|yml)$' | head -20

Repository: absmartly/python3-sdk

Length of output: 136


🏁 Script executed:

# Search for Ruff configuration in common config files
find . -maxdepth 2 -type f \( -name 'pyproject.toml' -o -name 'ruff.toml' -o -name '.ruff.toml' -o -name 'setup.cfg' -o -name 'tox.ini' \)

Repository: absmartly/python3-sdk

Length of output: 93


🏁 Script executed:

# Check if test_variant_assigner.py exists and view the relevant code
find . -type f -name 'test_variant_assigner.py' -exec cat -n {} \; | head -30

Repository: absmartly/python3-sdk

Length of output: 1283


🏁 Script executed:

# Check Ruff configuration
cat ruff.toml

Repository: absmartly/python3-sdk

Length of output: 981


🏁 Script executed:

# Check for Ruff settings in pyproject.toml
cat pyproject.toml

Repository: absmartly/python3-sdk

Length of output: 860


🏁 Script executed:

# Search for any existing noqa comments or S324 references in the codebase
rg -n 'noqa|S324|S303|S324' --type py

Repository: absmartly/python3-sdk

Length of output: 47


Address Ruff S324 on MD5 usage in test helper.

Line 9 triggers the active S324 security-hash lint rule. Since this hash is used for fixture determinism rather than cryptographic purposes, add a targeted suppression to keep CI clean without changing the semantics.

Suggested fix
 def hash_unit(unit):
-    dig = hashlib.md5(str(unit).encode('utf-8')).digest()
+    # Deterministic fixture hash for cross-SDK parity; not used for security.
+    dig = hashlib.md5(str(unit).encode('utf-8')).digest()  # noqa: S324
     return base64.urlsafe_b64encode(dig).rstrip(b'=')
🧰 Tools
🪛 Ruff (0.15.2)

[error] 9-9: Probable use of insecure hash functions in hashlib: md5

(S324)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_variant_assigner.py` around lines 8 - 10, The test helper uses
hashlib.md5 in function hash_unit to create deterministic fixtures, so add a
targeted Ruff suppression to silence S324: append an inline comment like "#
noqa: S324" to the hashlib.md5 call (the line assigning dig) and include a short
explanatory comment that this use is non-cryptographic / for deterministic
testing only; leave the rest of hash_unit unchanged.

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
README.md (2)

20-25: ⚠️ Potential issue | 🟡 Minor

Still missing a fence language.

This block is still untyped, so markdownlint will keep flagging MD040.

Proposed README fix
-```
+```text
 setuptools~=60.2.0
 requests~=2.28.1
 urllib3~=1.26.12
 jsons~=1.6.3
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @README.md around lines 20 - 25, The fenced code block listing Python
packages (setuptools, requests, urllib3, jsons) is missing a fence language
which triggers markdownlint MD040; update that block by changing the opening
triple backticks to include a language token (e.g., "text" or "bash") so the
block becomes ```text (or ```bash) followed by the four package lines and the
closing ```, ensuring the fenced block around the
setuptools/requests/urllib3/jsons list is properly typed.

---

462-483: ⚠️ Potential issue | 🟠 Major

Use an asyncio-compatible await here.

If wait_until_ready_async() returns a concurrent.futures.Future, Line 483 will fail at runtime because that object is not directly awaitable. In a FastAPI handler, wrap it for asyncio instead of blocking the event loop.

Proposed README fix

+import asyncio
 from fastapi import FastAPI, Request
 from absmartly import ABsmartly, ContextConfig
 import uuid
@@
     ctx = sdk.create_context(context_config)
-    await ctx.wait_until_ready_async()
+    await asyncio.wrap_future(ctx.wait_until_ready_async())

#!/bin/bash
set -euo pipefail

# Verify what Context.wait_until_ready_async() returns.
rg -n -C3 'def wait_until_ready_async|concurrent\.futures\.Future|executor\.submit|return .*Future' --type py sdk

# Show the README call sites that use the method.
rg -n -C2 'wait_until_ready_async\(' README.md

Expected result: if sdk/context.py shows wait_until_ready_async() returning a concurrent.futures.Future or an executor-submitted future, the FastAPI example should use asyncio.wrap_future(...) rather than await directly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @README.md around lines 462 - 483, The README's FastAPI example awaits
ctx.wait_until_ready_async() which may return a concurrent.futures.Future (not
awaitable); update the example to use an asyncio-compatible await by importing
asyncio and awaiting asyncio.wrap_future(ctx.wait_until_ready_async()) (or
otherwise converting the returned Future to an awaitable), keeping the
surrounding ContextConfig, sdk.create_context and ctx usage unchanged.
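The asyncio.wrap_future bridge suggested above, in a self-contained sketch (the SDK objects are stand-ins, not the real ABsmartly API):

```python
# Bridging a concurrent.futures.Future into asyncio without blocking the
# event loop, as the review recommends for the FastAPI example.
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)


def wait_until_ready_async():
    # Stand-in for the SDK method: returns a concurrent.futures.Future.
    return executor.submit(lambda: "ready")


async def handler():
    # wrap_future converts the thread-pool Future into an awaitable,
    # so the coroutine suspends instead of raising TypeError.
    return await asyncio.wrap_future(wait_until_ready_async())


result = asyncio.run(handler())
```

Directly awaiting the raw Future here would raise `TypeError: object Future can't be used in 'await' expression`, which is the failure mode the review flags.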


🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In @README.md:

  • Around line 20-25: The fenced code block listing Python packages (setuptools,
    requests, urllib3, jsons) is missing a fence language which triggers
    markdownlint MD040; update that block by changing the opening triple backticks
    to include a language token (e.g., "text" or "bash") so the block becomes
    ```text (or ```bash) followed by the four package lines and the closing ```,
    ensuring the fenced block around the setuptools/requests/urllib3/jsons list is
    properly typed.
  • Around line 462-483: The README's FastAPI example awaits
    ctx.wait_until_ready_async() which may return a concurrent.futures.Future (not
    awaitable); update the example to use an asyncio-compatible await by importing
    asyncio and awaiting asyncio.wrap_future(ctx.wait_until_ready_async()) (or
    otherwise converting the returned Future to an awaitable), keeping the
    surrounding ContextConfig, sdk.create_context and ctx usage unchanged.

---

ℹ️ Review info

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 93da8944-767f-4098-bdfb-ff21e4a1e7a8

📥 Commits

Reviewing files that changed from the base of the PR and between 08d140cf9442bd90d6294f9374908001959b61da and d9d85442c26b8ecf7b52bc955c1c1089caaaadc1.

📒 Files selected for processing (1)

  • README.md


