5 fix issue with geometry breaking from time to time #6
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
💤 Files with no reviewable changes (1)
📝 Walkthrough

The pull request refactors the numerical integration framework by consolidating algorithms into a dedicated integrators module, introduces FIRE (Fast Inertial Relaxation Engine) minimization support, restructures multiple function signatures across the codebase, adds geometry validation checks, expands configuration parameters for time-stepping and convergence, and establishes comprehensive logging infrastructure.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant VertexModel as VertexModel (Core)
    participant Integrators as Integrators Module
    participant Geo as Geo Object
    participant Set as Set Object
    User->>VertexModel: single_iteration()
    alt integrator == 'fire'
        VertexModel->>Integrators: initialize_fire(geo, c_set)
        Integrators->>Geo: Setup FIRE state
        Geo-->>Integrators: FIRE initialized
        VertexModel->>Integrators: fire_minimization_loop(geo, c_set, dof, g, t, num_step)
        loop Each FIRE iteration
            Integrators->>Geo: Update positions/velocities
            Integrators->>Integrators: compute_energies(geo, c_set)
            Integrators->>Integrators: check_if_fire_converged(geo, forces, c_set, dy, v, iter)
            Integrators->>Geo: Update FIRE state (velocity, dt)
        end
        Integrators-->>VertexModel: Converged or max iterations
        VertexModel->>VertexModel: iteration_converged()
    else integrator == 'newton_raphson' or other
        VertexModel->>Integrators: newton_raphson(geo, ...)
        Integrators->>Geo: Compute gradient & stiffness
        Integrators->>Integrators: line_search(geo, dof, Set, g, dy, geo_n)
        Integrators-->>VertexModel: Updated geo, gradient, energy
        VertexModel->>VertexModel: post_newton_raphson() with convergence checks
    end
    VertexModel-->>User: Iteration complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
@copilot this is a complex task, but these three tests are not passing and I don't know why. The model breaks in those situations where a cell has been remodelled and is now a scutoid. I want them to be scutoids, but there is something wrong in the vertices (cell.Y) or the forces that makes the model go wild, as seen in the tests.
@Pablo1990 I've opened a new pull request, #7, to work on those changes. Once the pull request is ready, I'll request review from you.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/pyVertexModel/algorithm/vertexModel.py`:
- Around line 536-538: The code dereferences self.set.dt_tolerance directly
which breaks for legacy pickled Set objects and allows non‑positive values to
cause endless halving; fix by reading a validated tolerance first (e.g. tol =
getattr(self.set, "dt_tolerance", DEFAULT_DT_TOLERANCE)), clamp/replace any
non‑positive or otherwise invalid values (e.g. if tol is None or tol <= 0: tol =
DEFAULT_DT_TOLERANCE), then use that validated tol in the conditional that
references dt_tolerance (the if block that checks (self.set.dt / self.set.dt0) >
... and resets self.set.MaxIter). Define a clear DEFAULT_DT_TOLERANCE constant
or use an existing config value and update the condition to use tol instead of
self.set.dt_tolerance.
In `@Tests/tests.py`:
- Around line 26-31: The current check on set_test.OutputFolder only compares to
b'' and None, but scipy.io.loadmat may return '' (str) or b'' (bytes) so empty
values can slip through; update the logic that sets set_test.OutputFolder (for
the Set(...) result) to treat None, b'' and '' as empty by normalizing bytes to
str (or checking both types) and then assigning '../Result/Test' when the
normalized value is empty; reference Set and set_test.OutputFolder to locate the
code and ensure the check uses isinstance or explicit comparisons for both b''
and '' before assigning the default OutputFolder.
```diff
         if (self.set.iter >= self.set.MaxIter and
-                (self.set.dt / self.set.dt0) > 1e-6):
+                (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
             self.set.MaxIter = self.set.MaxIter0
```
Guard dt_tolerance for legacy pickles / invalid values.
Line 536 now dereferences self.set.dt_tolerance; pickled Set objects created before this field existed will raise AttributeError, and non‑positive values can lead to endless halving. Consider a safe default + validation.
🛠️ Suggested fix

```diff
-        if (self.set.iter >= self.set.MaxIter and
-                (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
+        dt_tol = getattr(self.set, "dt_tolerance", 1e-6)
+        if dt_tol <= 0:
+            logger.warning("Invalid dt_tolerance=%s; defaulting to 1e-6", dt_tol)
+            dt_tol = 1e-6
+        if (self.set.iter >= self.set.MaxIter and
+                (self.set.dt / self.set.dt0) > dt_tol):
```

🤖 Prompt for AI Agents
In `@src/pyVertexModel/algorithm/vertexModel.py` around lines 536 - 538, The code
dereferences self.set.dt_tolerance directly which breaks for legacy pickled Set
objects and allows non‑positive values to cause endless halving; fix by reading
a validated tolerance first (e.g. tol = getattr(self.set, "dt_tolerance",
DEFAULT_DT_TOLERANCE)), clamp/replace any non‑positive or otherwise invalid
values (e.g. if tol is None or tol <= 0: tol = DEFAULT_DT_TOLERANCE), then use
that validated tol in the conditional that references dt_tolerance (the if block
that checks (self.set.dt / self.set.dt0) > ... and resets self.set.MaxIter).
Define a clear DEFAULT_DT_TOLERANCE constant or use an existing config value and
update the condition to use tol instead of self.set.dt_tolerance.
```python
if 'Set' in mat_info.keys():
    set_test = Set(mat_info['Set'])
    if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
        set_test.OutputFolder = '../Result/Test'
else:
    set_test = None
```
Normalize empty OutputFolder values ('' vs b'').
scipy.io.loadmat can return '' (str) as well as b'' (bytes). If '' slips through, later OutputFolder is not None checks can still trigger filesystem writes to CWD. Consider a broader empty check.
🛠️ Suggested fix

```diff
-    if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
-        set_test.OutputFolder = '../Result/Test'
+    if (set_test.OutputFolder is None or
+            (isinstance(set_test.OutputFolder, (bytes, str)) and len(set_test.OutputFolder) == 0)):
+        set_test.OutputFolder = '../Result/Test'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if 'Set' in mat_info.keys():
    set_test = Set(mat_info['Set'])
    if (set_test.OutputFolder is None or
            (isinstance(set_test.OutputFolder, (bytes, str)) and len(set_test.OutputFolder) == 0)):
        set_test.OutputFolder = '../Result/Test'
else:
    set_test = None
```
🤖 Prompt for AI Agents
In `@Tests/tests.py` around lines 26 - 31, The current check on
set_test.OutputFolder only compares to b'' and None, but scipy.io.loadmat may
return '' (str) or b'' (bytes) so empty values can slip through; update the
logic that sets set_test.OutputFolder (for the Set(...) result) to treat None,
b'' and '' as empty by normalizing bytes to str (or checking both types) and
then assigning '../Result/Test' when the normalized value is empty; reference
Set and set_test.OutputFolder to locate the code and ensure the check uses
isinstance or explicit comparisons for both b'' and '' before assigning the
default OutputFolder.
Pull request overview
This PR aims to fix intermittent “geometry breaking / vertices going wild” failures by making time-step backoff configurable and by preventing file I/O during tests that load saved model states.
Changes:
- Introduces a configurable `dt_tolerance` (used to decide whether `dt` can keep shrinking) and applies it in the non-convergence logic.
- Makes `VertexModel` skip VTK/image/state outputs when `set.OutputFolder` is `None` (enabling non-I/O tests).
- Adds new regression tests plus many new `.mat`/`.pkl` fixtures under `Tests_data/`, and updates the test data loading paths.
Reviewed changes
Copilot reviewed 4 out of 79 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| src/pyVertexModel/parameters/set.py | Adds dt_tolerance default and updates cyst fixture path. |
| src/pyVertexModel/algorithm/vertexModel.py | Gates output by OutputFolder and uses configurable dt_tolerance in time-step reduction. |
| Tests/tests.py | Extends load_data to support .pkl model states and updates test data paths. |
| Tests/test_vertexModel.py | Updates fixture paths and adds 3 new regression tests for “vertices going wild”. |
| Tests_data/* | Adds new fixtures required by updated and new tests. |
```diff
 file_name = 'LblImg_imageSequence.mat'
-test_dir = 'Tests/data/%s' % file_name
+test_dir = 'Tests/Tests_data/%s' % file_name
 if exists(test_dir):
     Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra(test_dir)
 else:
-    Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra('data/%s' % file_name)
+    Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra('Tests_data/%s' % file_name)
```
These tests reference Tests/Tests_data/..., but fixtures are under the repository root Tests_data/. Because the fallback uses a relative path, this can fail when tests aren't executed from the repo root. Prefer using an absolute path such as os.path.join(TEST_DIRECTORY, 'Tests_data', file_name) for both branches (and remove the exists() check entirely).
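A minimal sketch of the suggested approach, assuming a `TEST_DIRECTORY` constant like the one in `Tests/__init__.py`; the helper name `fixture_path` is illustrative, not code from the repo:

```python
import os

def fixture_path(base_dir, file_name):
    # Build one absolute path rooted at the test package directory; with no
    # CWD-dependent fallback, the exists() check becomes unnecessary.
    return os.path.join(base_dir, 'Tests_data', file_name)

# Hypothetical usage, with TEST_DIRECTORY standing in for Tests.TEST_DIRECTORY:
# test_dir = fixture_path(TEST_DIRECTORY, 'LblImg_imageSequence.mat')
```

Both branches of the original `if exists(...)` would then collapse into a single call.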
```diff
 file_name = 'voronoi_40cells.pkl'
-test_dir = TEST_DIRECTORY + '/Tests/data/%s' % file_name
+test_dir = TEST_DIRECTORY + '/Tests/Tests_data/%s' % file_name
 if exists(test_dir):
     vModel_test.set.initial_filename_state = test_dir
 else:
-    vModel_test.set.initial_filename_state = 'data/%s' % file_name
+    vModel_test.set.initial_filename_state = 'Tests_data/%s' % file_name
```
test_dir is built as TEST_DIRECTORY + '/Tests/Tests_data/...', but the repository data directory is TEST_DIRECTORY/Tests_data. As written, the exists() check will always fail in a normal checkout, and the code relies on a relative 'Tests_data/...' path (CWD-dependent). Use os.path.join(TEST_DIRECTORY, 'Tests_data', file_name) consistently.
```python
# Run for 10 iterations. dt should not decrease to 1e-1
vModel_test.set.tend = vModel_test.t + 20 * vModel_test.set.dt0
```
The comment says “Run for 10 iterations”, but tend is set to t + 20 * dt0 (20 time steps at dt0). Either adjust the comment or the tend calculation so the test intent is unambiguous.
```diff
         if (self.set.iter >= self.set.MaxIter and
-                (self.set.dt / self.set.dt0) > 1e-6):
+                (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
```
Backward-compatibility risk: self.set.dt_tolerance is a new attribute. If a VertexModel is restored from an older .pkl where the Set instance predates this field, iteration_did_not_converged will raise AttributeError. Consider using getattr(self.set, 'dt_tolerance', 1e-6) here (or ensure Set defines it in a post-load hook) to keep old saved states runnable.
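The suggested guard can be sketched as a small helper; `DEFAULT_DT_TOLERANCE` and `get_dt_tolerance` are illustrative names, and `1e-6` matches the literal the old code used:

```python
DEFAULT_DT_TOLERANCE = 1e-6  # matches the previous hard-coded literal

def get_dt_tolerance(settings):
    # Legacy pickled Set objects may predate the dt_tolerance field, so read
    # it defensively and replace missing or non-positive values.
    tol = getattr(settings, 'dt_tolerance', DEFAULT_DT_TOLERANCE)
    if tol is None or tol <= 0:
        tol = DEFAULT_DT_TOLERANCE
    return tol
```

The non-convergence check would then compare `(self.set.dt / self.set.dt0)` against `get_dt_tolerance(self.set)` instead of dereferencing the attribute directly.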
```python
test_dir = abspath('Tests/Tests_data/%s' % file_name)
if file_name.endswith('.mat'):
    if exists(test_dir):
        mat_info = scipy.io.loadmat(test_dir)
    else:
        mat_info = scipy.io.loadmat('Tests_data/%s' % file_name)
```
load_data builds test_dir using abspath('Tests/Tests_data/...'), but the repository test fixtures live under the top-level Tests_data/ directory. This makes the helper depend on the current working directory and the fallback relative path, which can break when tests are run from a different CWD. Prefer building an absolute path from Tests.TEST_DIRECTORY (or Path(__file__).resolve()) and always loading from .../Tests_data/<file>.
```python
v_model.set.export_images = False
v_model.set.VTK = False

return v_model
```
load_data returns a (geo, set, mat_info) tuple for .mat files but returns a VertexModel... instance for .pkl files. This polymorphic return type is easy to misuse (callers can accidentally unpack a .pkl result and fail at runtime). Consider splitting into load_mat_data(...) and load_model_state(...), or always returning a consistent tuple (e.g., (None, None, v_model) or (geo, set, extra) with a documented contract).
```diff
-    return v_model
+    return None, None, v_model
```
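One way to read the reviewer's "consistent tuple" option is a small adapter at the call site; `normalize_load_result` is a hypothetical name, not code from the repo:

```python
def normalize_load_result(result):
    # load_data returns (geo, set, mat_info) for .mat files but a bare model
    # instance for .pkl files; coerce both shapes into one 3-tuple so callers
    # can always unpack three values safely.
    if isinstance(result, tuple) and len(result) == 3:
        return result
    return (None, None, result)
```

Splitting the helper into `load_mat_data` / `load_model_state`, as the comment suggests, would avoid the need for this adapter entirely.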
…tability Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@Tests/tests.py`:
- Around line 53-54: The raised FileNotFoundError for non-.mat/.pkl inputs is
misleading; change the exception to a more appropriate type (e.g., ValueError or
a custom UnsupportedFileTypeError) and update the message to indicate an
unsupported file extension including the offending file_name or its extension;
locate the raise statement that references file_name in Tests/tests.py and
replace it with a descriptive "Unsupported file type" exception and message.
🧹 Nitpick comments (3)
Tests/__init__.py (1)
5-7: TEST_DIRECTORY and PROJECT_DIRECTORY are computed identically. Both variables use the same expression `os.path.dirname(os.path.dirname(os.path.abspath(__file__)))`. If they serve different purposes, consider clarifying with comments or distinct logic. If PROJECT_DIRECTORY is meant to allow override via environment variable while TEST_DIRECTORY is the fixed path, the naming and intent could be documented.

Tests/tests.py (2)
15-21: Path fallback logic may produce confusing errors for missing .mat files. If `test_dir` doesn't exist and the fallback path `'Tests_data/%s'` also doesn't exist, `scipy.io.loadmat` will raise its own `FileNotFoundError` with the fallback path. This could be confusing, since the error won't mention the primary path that was tried first. Consider adding an existence check for the fallback path, or logging which path is being used:
♻️ Suggested improvement
```diff
 def load_data(file_name, return_geo=True):
     test_dir = abspath('Tests/Tests_data/%s' % file_name)
     if file_name.endswith('.mat'):
         if exists(test_dir):
             mat_info = scipy.io.loadmat(test_dir)
-        else:
+        elif exists('Tests_data/%s' % file_name):
             mat_info = scipy.io.loadmat('Tests_data/%s' % file_name)
+        else:
+            raise FileNotFoundError('MAT file %s not found in Tests/Tests_data or Tests_data' % file_name)
```
40-45: Same fallback pattern without existence check for .pkl files. Similar to the .mat handling, if neither path exists for a .pkl file, `load_state` will fail with an error that may not clearly indicate that both paths were tried.
```diff
 else:
-    geo_test = None
-    set_test = None
     raise FileNotFoundError('File %s not found' % file_name)
```
Error message is misleading for unsupported file extensions.
FileNotFoundError with "not found" message is raised for files that don't end with .mat or .pkl, but this is actually an "unsupported file type" error, not a "file not found" error. This could mislead debugging efforts.
♻️ Suggested fix

```diff
 else:
-    raise FileNotFoundError('File %s not found' % file_name)
+    raise ValueError('Unsupported file type: %s (expected .mat or .pkl)' % file_name)
```

📝 Committable suggestion
```python
else:
    raise ValueError('Unsupported file type: %s (expected .mat or .pkl)' % file_name)
```
🤖 Prompt for AI Agents
In `@Tests/tests.py` around lines 53 - 54, The raised FileNotFoundError for
non-.mat/.pkl inputs is misleading; change the exception to a more appropriate
type (e.g., ValueError or a custom UnsupportedFileTypeError) and update the
message to indicate an unsupported file extension including the offending
file_name or its extension; locate the raise statement that references file_name
in Tests/tests.py and replace it with a descriptive "Unsupported file type"
exception and message.
…nto copilot/sub-pr-6
For explicit (forward Euler) methods, don't restore backup geometry if the step keeps the gradient stable (< 1.5x increase). This allows the solver to make incremental progress even when the gradient is above tolerance. Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Scale explicit step by tol/gr when gradient is large to prevent explosion. Accept steps that keep gradient stable (< 1.1x) even if not fully converged. Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
- Add MIN_SCALE_FACTOR=0.1 to prevent excessive iterations
- Extract GRADIENT_STABILITY_FACTOR=1.1 as named constant
- Initialize gr_before_step=0.0 for implicit method to avoid hasattr check

Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
- Fixed off-by-one error in gr_before_step tracking
- Add strict rejection of any step that increases gradient > 0.1%
- Initialize gr_before_step=0 in __init__
- Always use conservative step scaling (SAFETY_FACTOR=0.5)
- Store gradient AFTER step for next iteration comparison

This prevents gradient explosion but may need tuning of SAFETY_FACTOR.

Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
- Increase SAFETY_FACTOR from 0.5 → 0.9 to allow faster convergence
- Increase GRADIENT_INCREASE_TOLERANCE from 1.001 → 1.01 (allow 1% growth)
- Balance between preventing explosion and making progress

Still investigating optimal values to pass the test without explosion.

Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Tested approaches:
1. Fixed safety factor (0.5-0.9): hits dt_tolerance
2. Adaptive based on gradient trend: still explodes
3. Conditional scaling only when increasing: explodes
4. Extra conservative for small gradients (0.6-0.75): hits dt_tolerance

All prevent explosion OR allow convergence, but not both simultaneously. Core issue: explicit Euler is fundamentally unstable for this geometry.

Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Key findings:
1. Viscosity initially dominates (95%), but becomes zero after the first step
2. Contractility, Surface, TriEnergyBarrierAR are the main terms
3. No degenerate geometry: all cells/triangles have reasonable values
4. Safety factor (0.75) prevents explosion but slows convergence
5. Root cause: explicit Euler is inherently unstable for this tissue state

Proposed solution: increase dt_tolerance or adjust the safety factor balance.

Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
…tion method Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
…ical method Co-authored-by: Pablo1990 <1974224+Pablo1990@users.noreply.github.com>
Configure test_weird_bug_should_not_happen to use FIRE algorithm with edge case tuning
…ng-from-time-to-time' into 5-fix-issue-with-geometry-breaking-from-time-to-time
@copilot why are the tests not running?
@Pablo1990 I've opened a new pull request, #26, to work on those changes. Once the pull request is ready, I'll request review from you.
Actionable comments posted: 16
♻️ Duplicate comments (1)
Tests/tests.py (1)
27-30: ⚠️ Potential issue | 🟡 Minor

Normalize empty `OutputFolder` values for both `str` and `bytes`. `scipy.io.loadmat` can return `''` as well as `b''`. The current check only catches the bytes case, so tests can still end up writing into the CWD when the loaded folder is an empty string.

🛠️ Suggested fix

```diff
-    if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
+    if (
+        set_test.OutputFolder is None
+        or (
+            isinstance(set_test.OutputFolder, (bytes, str))
+            and len(set_test.OutputFolder) == 0
+        )
+    ):
         set_test.OutputFolder = '../Result/Test'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/tests.py` around lines 27 - 30, The code in Tests/tests.py only checks for a bytes-empty OutputFolder (b'') and misses the str-empty case (''), so normalize Set(mat_info['Set']).OutputFolder by treating None, b'' and '' as empty: after creating set_test (from Set(mat_info['Set'])), if set_test.OutputFolder is None or set_test.OutputFolder == b'' or set_test.OutputFolder == '' (or decode bytes and check empty) then assign '../Result/Test'; reference the Set class, mat_info['Set'] and the OutputFolder attribute when making this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/pyVertexModel/__init__.py`:
- Around line 20-24: The guard that skips adding a console handler currently
checks any(isinstance(handler, logging.StreamHandler) for handler in
logger.handlers) which will treat FileHandler as a StreamHandler and suppress
console output; update the check to detect an actual stdout/stderr console by
verifying handler is a StreamHandler and handler.stream is sys.stdout or
sys.stderr (e.g., any(isinstance(h, logging.StreamHandler) and getattr(h,
"stream", None) in (sys.stdout, sys.stderr) for h in logger.handlers)), ensure
sys is imported, then create and add console_handler (formatter and
logger.addHandler(console_handler)) only if no stdout/stderr handler exists
(references: logger, logger.handlers, logging.StreamHandler,
logging.FileHandler, console_handler, formatter).
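The point about `FileHandler` subclassing `StreamHandler` can be sketched as below; the helper name is illustrative, and the stream check mirrors the prompt's suggestion:

```python
import logging
import sys

def has_console_handler(logger):
    # logging.FileHandler subclasses StreamHandler, so an isinstance check
    # alone would wrongly count a file handler as a console handler; also
    # require that the handler actually writes to stdout or stderr.
    return any(
        isinstance(h, logging.StreamHandler)
        and getattr(h, 'stream', None) in (sys.stdout, sys.stderr)
        for h in logger.handlers
    )
```

The guard in `__init__.py` would then add the console handler only when `has_console_handler(logger)` is false.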
In `@src/pyVertexModel/algorithm/integrators.py`:
- Around line 592-597: The block that treats exceeding fire_max_iterations as
convergence is incorrect: in the FIRE integrator (variables iteration_count,
max_iter, converged, reason, logger.warning, max_force) do not set
converged=True when iteration_count >= max_iter; instead mark it as not
converged (converged=False), set an appropriate reason like "Reached max
iterations" and return that non-converged status so callers know the timestep
did not meet force/displacement criteria; keep the logger.warning (including
max_force) to report the stall but ensure the returned converged flag is False.
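The intended semantics can be sketched as a small status function; the names (`iteration_count`, `max_iter`, `max_force`) follow the prompt, but the function itself is hypothetical, not the FIRE loop's actual API:

```python
def fire_termination_status(iteration_count, max_iter, max_force, force_tol):
    # Hitting the iteration cap is a stall, not convergence: only a force
    # below tolerance may report converged=True. Callers can still log a
    # warning (including max_force) when the max-iterations branch is hit.
    if max_force < force_tol:
        return True, 'force below tolerance'
    if iteration_count >= max_iter:
        return False, 'reached max iterations'
    return False, 'continue'
```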
- Line 76: The remodeling call incorrectly expects 9 return values from
newton_raphson, causing a ValueError; change the unpacking to match the current
8-value API (remove the extra placeholder) where newton_raphson is called (e.g.,
in the line with geo_0, geo_n, geo, dofs, c_set, k, g, -1, -1) so it assigns
exactly eight variables—geo, g, k, energy, c_set, gr, dyr, dy—and update any
tests that used the old 9-value unpacking to the same 8-variable form.
- Around line 640-645: In single_iteration_fire and fire_minimization_loop fix
the velocity/state handling: remove the unconditional reset of
geo._fire_velocity to zeros in single_iteration_fire so the hasattr(geo,
'_fire_velocity') guard actually preserves momentum, and in
fire_minimization_loop ensure initialize_fire() is called (i.e., create
_fire_dt, _fire_alpha, _fire_n_positive, _fire_iteration_count) before any
resets rather than resetting _fire_velocity before the hasattr check; also
replace any uses of ndarray.flatten() when writing back (e.g., velocity[:, :],
the sections at the flatten() usages around where velocities are updated) with a
reshape(...) or direct slicing that returns a view so writes persist to
geo._fire_velocity, and ensure initialize_fire() is the single place that
allocates the velocity array (np.zeros) and sets initial FIRE parameters.
In `@src/pyVertexModel/algorithm/vertexModel.py`:
- Around line 539-544: When integrators.fire_minimization_loop returns converged
== False, you must handle that failure before re-entering the timestep loop: in
single_iteration (the block using fire_minimization_loop) call
iteration_did_not_converged(...) with appropriate context (same args used
elsewhere), set the didNotConverge flag or increment the failure counter and
advance/modify self.t or self.numStep as the timestep-failure path in
iterate_over_time expects, and ensure you do not simply fall through without
changing state; mirror the failure-handling logic used in the non-FIRE branch
(see iteration_did_not_converged, iterate_over_time) so the timestep progresses
or the loop exits instead of infinite-spinning.
In `@src/pyVertexModel/analysis/analyse_simulation.py`:
- Around line 418-420: The code unconditionally forces v_model.set.integrator =
'euler', which overwrites any integrator saved in the serialized model (breaking
FIRE/RK2 stability fixes); instead, stop forcing Euler — either remove that
assignment or change it to only set the integrator if no value exists (e.g.,
check v_model.set.integrator is None/empty before assigning). Locate the lines
modifying v_model.set.integrator (alongside v_model.set.ablation and
v_model.set.export_images) and ensure the reload preserves the model's existing
integrator setting rather than hard-coding 'euler'.
In `@src/pyVertexModel/analysis/find_required_purse_string.py`:
- Around line 137-144: The code builds dirs_within from os.listdir and then uses
dirs_within[0], which is unstable; sort the filtered list before selecting the
first entry so selection is deterministic: after creating dirs_within (the list
comprehension that filters on 'no_Remodelling_ablating_'), call a stable sort
(e.g., dirs_within.sort() or sorted(...)) and then assign directory =
dirs_within[0] and proceed to the existing print/processing logic in
find_required_purse_string.py to ensure a consistent run is chosen.
- Around line 156-164: The code currently forces vModel.set.integrator = 'euler'
and vModel.set.dt_tolerance = 1e-1 immediately before calling
vModel.required_purse_string_strength(..., load_existing=True), which mixes
solver overrides with cached measurements; remove or guard these overrides so
they do not apply when loading existing results. Specifically, in the block
around vModel.set.integrator and vModel.set.dt_tolerance, either (A) delete
these assignments so the model uses its original solver/tolerance when
load_existing=True, or (B) only set them when calling
required_purse_string_strength with load_existing=False (or explicitly set
load_existing=False to force a recompute with the new integrator). Also, if you
must temporarily change settings for a recompute, save the original
vModel.set.integrator and vModel.set.dt_tolerance, set the new values, call
required_purse_string_strength with load_existing=False, then restore the
originals to avoid side effects.
- Around line 187-199: The block that appends to output_csv is re-writing
accumulated lists each outer-loop iteration because model_name, recoilings,
lambda_s1_list, lambda_s2_list, purse_string_strength_0 and time_list are
cumulative; fix by only writing the current-iteration rows: create a df_current
(e.g., from local per-iteration lists or by resetting those lists at start of
the loop) and then concat df_existing with df_current (not the full accumulated
df), using the existing variables df_existing, df_current (instead of df),
df_final and output_csv to locate the change; alternatively clear/reset
model_name, recoilings, lambda_s1_list, lambda_s2_list, purse_string_strength_0
and time_list before building df so each write only contains new rows.
In `@src/pyVertexModel/geometry/geo.py`:
- Around line 2313-2389: The validator currently only checks numeric comparisons
on cached measures (cell.Vol, face.Area, tri.Area) so NaN/Inf can pass; update
the checks in the alive_cells loops to explicitly reject non-finite values (use
math.isfinite or numpy.isfinite) for cell.Vol, face.Area, and tri.Area: log a
debug message and return False when any of those is not finite; also ensure the
tiny_triangle_count/total_triangles loop skips or rejects non-finite tri.Area
(don’t count them as tiny) so stale cached non-finite measures cannot make the
geometry validate as correct.
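A minimal sketch of the finite-value check the prompt describes (the helper name and flattened-argument shape are illustrative, not the `Geo` validator's real signature):

```python
import math

def cached_measures_are_finite(cell_vol, face_areas, tri_areas):
    # NaN compares False against every threshold, so plain numeric
    # comparisons let non-finite cached measures slip through; reject
    # them explicitly with math.isfinite.
    values = [cell_vol, *face_areas, *tri_areas]
    return all(math.isfinite(v) for v in values)
```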
- Around line 2303-2332: The loops use variable names that shadow imported
modules (cell, face) and include unused enumerate indexes; rename the iteration
variables to avoid shadowing (e.g., use c_cell instead of cell and c_face
instead of face) and remove unused index variables where not needed (drop
cell_idx/face_idx or replace with underscore if index is used elsewhere), then
update all references inside the loops (e.g., c_cell.Y, c_cell.Vol,
c_cell.Faces, c_face) accordingly; apply the same renaming to the later loop
around lines referenced in the comment (the second faces loop covering
2369-2375) so Ruff F402/B007 warnings are resolved.
In `@src/pyVertexModel/util/utils.py`:
- Around line 697-699: The ylim adjustment is gated by an exact string match on
y_axis_label which misses dynamic labels like "Purse string strength (t=6.0)";
change the check in the utils plotting code that currently references
y_axis_label to detect the base metric instead (e.g., check if "Purse string
strength" is a substring or use a regex like r"^Purse string strength\s*\(t=")
and then call plt.ylim(0, 1.2e-3) when matched; update the condition so it works
for any t value/format (0.1, 6.0, etc.) rather than only the exact 'Purse string
strength (t=0.1)' string.
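A sketch of the substring/regex match the prompt suggests, using the regex option (helper and pattern names are illustrative):

```python
import re

# Match the base metric for any timepoint, e.g. "(t=0.1)" or "(t=6.0)".
_PURSE_STRING_LABEL = re.compile(r"^Purse string strength\s*\(t=")

def is_purse_string_label(y_axis_label):
    return bool(_PURSE_STRING_LABEL.match(y_axis_label))
```

The plotting code would then call `plt.ylim(0, 1.2e-3)` whenever `is_purse_string_label(y_axis_label)` is true, instead of comparing against the exact `'Purse string strength (t=0.1)'` string.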
In `@TEST_FIRE_CONFIGURATION.md`:
- Around line 118-120: The documentation in TEST_FIRE_CONFIGURATION.md
incorrectly points the implementation reference to
src/pyVertexModel/algorithm/newtonRaphson.py; update that reference to
src/pyVertexModel/algorithm/integrators.py (the module that actually contains
the FIRE implementation) so readers are directed to the correct code path
(search for the line containing "Implementation:
`src/pyVertexModel/algorithm/newtonRaphson.py`" and replace it with the
integrators path).
In `@Tests/test_vertexModel.py`:
- Around line 487-519: Add explicit assertions that the geometry did not "go
wild" and that the timestep never collapsed: after calling
vModel_test.iterate_over_time(), assert that vertex positions are finite and
within expected bounds (use the model's vertex container, e.g.,
vModel_test.vertices or vModel_test.vertex_positions, and check for no NaN/Inf
and reasonable extents) and assert the minimum timestep ratio stayed above
threshold by computing min_dt_ratio = (vModel_test.min_dt_reached or
min(vModel_test.dt_history) if available) / vModel_test.set.dt0 and checking
min_dt_ratio > 0.1 (or another chosen threshold); keep the existing
self.assertFalse(vModel_test.didNotConverge) and add these two assertions in
test_weird_bug_should_not_happen (and mirror the same checks in the other
mentioned tests).
- Line 400: The test calls an undefined module newtonRaphson.KgGlobal; replace
that call with the correct integrators.KgGlobal invocation so the test uses the
imported module (change newtonRaphson.KgGlobal(...) to
integrators.KgGlobal(...)) while keeping the same returned variables (g_test,
K_test, energies_test, _), ensuring the test imports/uses the existing
integrators module and its KgGlobal function.
In `@Tests/tests.py`:
- Around line 14-19: The load_data() helper currently only checks
'Tests/Tests_data' and 'Tests_data' for .mat files; update its lookup to also
probe the canonical repo test-data directories 'Tests/data' and 'data' (in both
places where the logic appears around the existing test_dir/file_name checks),
trying each path in order and falling back appropriately before calling
scipy.io.loadmat so fixtures stored in Tests/data or data are found; ensure the
same change is applied to the duplicate block referenced by the second
occurrence (the other mat_info load branch).
---
Duplicate comments:
In `@Tests/tests.py`:
- Around line 27-30: The code in Tests/tests.py only checks for a bytes-empty
OutputFolder (b'') and misses the str-empty case (''), so normalize
Set(mat_info['Set']).OutputFolder by treating None, b'' and '' as empty: after
creating set_test (from Set(mat_info['Set'])), if set_test.OutputFolder is None
or set_test.OutputFolder == b'' or set_test.OutputFolder == '' (or decode bytes
and check empty) then assign '../Result/Test'; reference the Set class,
mat_info['Set'] and the OutputFolder attribute when making this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 29ee34dd-9b73-408e-89f4-b16a6c8a97fb
⛔ Files ignored due to path filters (6)
Tests_data/geometry_correct_1.pkl is excluded by !**/*.pkl
Tests_data/geometry_correct_2.pkl is excluded by !**/*.pkl
Tests_data/geometry_correct_3.pkl is excluded by !**/*.pkl
Tests_data/vertices_going_wild_1.pkl is excluded by !**/*.pkl
Tests_data/vertices_going_wild_4.pkl is excluded by !**/*.pkl
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (18)
FIRE_ALGORITHM_GUIDE.md, TEST_FIRE_CONFIGURATION.md, Tests/test_geo.py, Tests/test_kg.py, Tests/test_newtonRaphson.py, Tests/test_vertexModel.py, Tests/tests.py, pyproject.toml, src/pyVertexModel/__init__.py, src/pyVertexModel/algorithm/integrators.py, src/pyVertexModel/algorithm/vertexModel.py, src/pyVertexModel/analysis/analyse_simulation.py, src/pyVertexModel/analysis/analyse_simulations.py, src/pyVertexModel/analysis/find_required_purse_string.py, src/pyVertexModel/geometry/geo.py, src/pyVertexModel/mesh_remodelling/remodelling.py, src/pyVertexModel/parameters/set.py, src/pyVertexModel/util/utils.py
# Check if logger already has a console handler to avoid adding multiple handlers
if not any(isinstance(handler, logging.StreamHandler) for handler in logger.handlers):
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)
This guard also suppresses the console when only a file handler is installed.
logging.FileHandler is a StreamHandler, so any(isinstance(..., logging.StreamHandler)) will skip the console handler whenever an app configures file logging first. Check for an actual stdout/stderr console handler instead of any StreamHandler.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/__init__.py` around lines 20 - 24, The guard that skips
adding a console handler currently checks any(isinstance(handler,
logging.StreamHandler) for handler in logger.handlers) which will treat
FileHandler as a StreamHandler and suppress console output; update the check to
detect an actual stdout/stderr console by verifying handler is a StreamHandler
and handler.stream is sys.stdout or sys.stderr (e.g., any(isinstance(h,
logging.StreamHandler) and getattr(h, "stream", None) in (sys.stdout,
sys.stderr) for h in logger.handlers)), ensure sys is imported, then create and
add console_handler (formatter and logger.addHandler(console_handler)) only if
no stdout/stderr handler exists (references: logger, logger.handlers,
logging.StreamHandler, logging.FileHandler, console_handler, formatter).
f'dt/dt0={c_set.dt / c_set.dt0:.3g}')

- geo, g, k, energy, c_set, gr, dyr, dy = newton_raphson(geo_0, geo_n, geo, dofs, c_set, k, g, -1, -1)
+ geo, g, k, energy, c_set, gr, dyr, dy, _ = newton_raphson(geo_0, geo_n, geo, dofs, c_set, k, g, -1, -1)
Keep the remodeling solve on the current 8-value newton_raphson() API.
newton_raphson() below still returns 8 items. The extra placeholder here makes this path fail immediately with a ValueError before any convergence handling runs. The updated Newton-Raphson tests appear to have the same mismatch.
Suggested fix
- geo, g, k, energy, c_set, gr, dyr, dy, _ = newton_raphson(geo_0, geo_n, geo, dofs, c_set, k, g, -1, -1)
+ geo, g, k, energy, c_set, gr, dyr, dy = newton_raphson(geo_0, geo_n, geo, dofs, c_set, k, g, -1, -1)
🧰 Tools
🪛 Ruff (0.15.6)
[warning] 76-76: Unpacked variable energy is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/algorithm/integrators.py` at line 76, The remodeling call
incorrectly expects 9 return values from newton_raphson, causing a ValueError;
change the unpacking to match the current 8-value API (remove the extra
placeholder) where newton_raphson is called (e.g., in the line with geo_0,
geo_n, geo, dofs, c_set, k, g, -1, -1) so it assigns exactly eight
variables—geo, g, k, energy, c_set, gr, dyr, dy—and update any tests that used
the old 9-value unpacking to the same 8-variable form.
# Check maximum iterations first
if iteration_count >= max_iter:
    converged = True
    reason = f"Reached max iterations ({max_iter})"
    logger.warning(f"FIRE: {reason} - maxF={max_force:.3e}")
    return converged, reason
Do not treat fire_max_iterations exhaustion as convergence.
Returning converged=True here causes callers to commit a timestep that never satisfied the force/displacement criteria. That hides stalled minimizations as successful FIRE steps.
Suggested fix
if iteration_count >= max_iter:
- converged = True
+ converged = False
reason = f"Reached max iterations ({max_iter})"
logger.warning(f"FIRE: {reason} - maxF={max_force:.3e}")
return converged, reason
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/algorithm/integrators.py` around lines 592 - 597, The block
that treats exceeding fire_max_iterations as convergence is incorrect: in the
FIRE integrator (variables iteration_count, max_iter, converged, reason,
logger.warning, max_force) do not set converged=True when iteration_count >=
max_iter; instead mark it as not converged (converged=False), set an appropriate
reason like "Reached max iterations" and return that non-converged status so
callers know the timestep did not meet force/displacement criteria; keep the
logger.warning (including max_force) to report the stall but ensure the returned
converged flag is False.
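The corrected contract can be sketched in isolation (the signature and tolerance handling here are simplified assumptions, not the module's actual API): exhausting the iteration budget reports failure, so the caller rejects the step instead of committing it.

```python
import logging

logger = logging.getLogger(__name__)

def check_if_fire_converged(max_force, f_tol, iteration_count, max_iter):
    """Return (converged, reason); iteration exhaustion is NOT convergence."""
    if max_force < f_tol:
        return True, f"maxF={max_force:.3e} below tolerance"
    if iteration_count >= max_iter:
        reason = f"Reached max iterations ({max_iter})"
        logger.warning(f"FIRE: {reason} - maxF={max_force:.3e}")
        return False, reason  # caller must treat the step as failed
    return False, "continue iterating"
```

The warning still reports the stall, but the returned flag now tells the timestep loop the truth.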
if not hasattr(geo, '_fire_velocity'):
    # Initialize velocity to zero
    initialize_fire(geo, c_set)
    logger.info("FIRE algorithm initialized")

geo._fire_velocity = np.zeros((geo.numF + geo.numY + geo.nCells, 3))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
rg -n "_fire_velocity = np.zeros|flatten\\(\\)\\[dof\\]" src/pyVertexModel/algorithm/integrators.py
python - <<'PY'
import numpy as np
v = np.zeros((2, 3))
flat = v.flatten()
flat[0] = 1.0
print("original_after_flatten_write:", v.ravel().tolist())
print("flatten_copy_after_write:", flat.tolist())
PY
Repository: Pablo1990/pyVertexModel
Length of output: 440
🏁 Script executed:
#!/bin/bash
# Examine the hasattr guard and initialization (lines 630-660)
sed -n '630,660p' src/pyVertexModel/algorithm/integrators.py
# Examine the flatten write issue (lines 700-720)
sed -n '700,720p' src/pyVertexModel/algorithm/integrators.py
# Examine fire_minimization_loop behavior (lines 760-790)
sed -n '760,790p' src/pyVertexModel/algorithm/integrators.py
# Find the initialize_fire function to understand what state it sets
rg -n "def initialize_fire" src/pyVertexModel/algorithm/integrators.py
Repository: Pablo1990/pyVertexModel
Length of output: 3073
🏁 Script executed:
#!/bin/bash
# Check what initialize_fire actually does and what attributes it sets
rg -A 15 "def initialize_fire" src/pyVertexModel/algorithm/integrators.py
Repository: Pablo1990/pyVertexModel
Length of output: 363
Velocity state is lost between FIRE iterations due to zeroing on every iteration and non-persistent flatten() writes.
In single_iteration_fire(), line 645 unconditionally resets _fire_velocity to zeros on every iteration, destroying the momentum state that FIRE depends on. Additionally, the flatten() indexing on line 714 writes to a copy (not a view), so velocity updates never persist to the original array. The hasattr guard on line 640 correctly prevents re-initialization, but the unconditional zero-reset negates that protection.
In fire_minimization_loop(), line 774 resets _fire_velocity before the hasattr check on line 776, ensuring the check always sees the attribute as present and skipping the else block that calls initialize_fire(). This prevents fresh geometries that call fire_minimization_loop() directly from receiving _fire_dt, _fire_alpha, _fire_n_positive, or _fire_iteration_count.
The flatten() pattern (lines 658, 714) must use reshape or a direct view to ensure writes persist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/algorithm/integrators.py` around lines 640 - 645, In
single_iteration_fire and fire_minimization_loop fix the velocity/state
handling: remove the unconditional reset of geo._fire_velocity to zeros in
single_iteration_fire so the hasattr(geo, '_fire_velocity') guard actually
preserves momentum, and in fire_minimization_loop ensure initialize_fire() is
called (i.e., create _fire_dt, _fire_alpha, _fire_n_positive,
_fire_iteration_count) before any resets rather than resetting _fire_velocity
before the hasattr check; also replace any uses of ndarray.flatten() when
writing back (e.g., velocity[:, :], the sections at the flatten() usages around
where velocities are updated) with a reshape(...) or direct slicing that returns
a view so writes persist to geo._fire_velocity, and ensure initialize_fire() is
the single place that allocates the velocity array (np.zeros) and sets initial
FIRE parameters.
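The copy-versus-view pitfall behind this finding is easy to demonstrate in isolation: `flatten()` always returns a copy, while `reshape(-1)` (like `ravel()` on a contiguous array) returns a view, so indexed writes persist to the original array:

```python
import numpy as np

v = np.zeros((2, 3))
dof = [0, 4]  # flat indices into the (2, 3) array

# flatten() returns a copy: writes through it never reach the original
flat = v.flatten()
flat[dof] = 1.0
assert v.sum() == 0.0  # v is untouched

# reshape(-1) on a contiguous array returns a view: writes persist
view = v.reshape(-1)
view[dof] = 1.0
assert v[0, 0] == 1.0 and v[1, 1] == 1.0  # flat index 4 -> row 1, col 1
```

This is why velocity updates written through `_fire_velocity.flatten()[dof]` are silently discarded.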
if self.set.integrator == 'fire':
    self.geo, converged, gr = integrators.fire_minimization_loop(self.geo, self.set, self.Dofs.Free, g, self.t,
                                                                 self.numStep)
    if converged:
        self.iteration_converged()
    else:
Handle the FIRE failure path before re-entering the timestep loop.
When fire_minimization_loop() returns converged=False, single_iteration() neither advances t nor routes through failure handling. The FIRE branch in iteration_did_not_converged() also only logs, so iterate_over_time() can spin forever on the same timestep with didNotConverge == False.
Suggested fix
if self.set.integrator == 'fire':
self.geo, converged, gr = integrators.fire_minimization_loop(self.geo, self.set, self.Dofs.Free, g, self.t,
self.numStep)
if converged:
self.iteration_converged()
+ elif post_operations:
+ self.iteration_did_not_converged()
else:
self.geo, g, __, __, self.set, gr, dyr, dy = integrators.newton_raphson(self.geo_0, self.geo_n, self.geo,
self.Dofs, self.set, K, g,
self.numStep, self.t,
self.set.implicit_method)
@@
else:
logger.warning("FIRE did not converge")
+ self.didNotConverge = True
Also applies to: 586-603
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/algorithm/vertexModel.py` around lines 539 - 544, When
integrators.fire_minimization_loop returns converged == False, you must handle
that failure before re-entering the timestep loop: in single_iteration (the
block using fire_minimization_loop) call iteration_did_not_converged(...) with
appropriate context (same args used elsewhere), set the didNotConverge flag or
increment the failure counter and advance/modify self.t or self.numStep as the
timestep-failure path in iterate_over_time expects, and ensure you do not simply
fall through without changing state; mirror the failure-handling logic used in
the non-FIRE branch (see iteration_did_not_converged, iterate_over_time) so the
timestep progresses or the loop exits instead of infinite-spinning.
if y_axis_label == 'Purse string strength (t=' + str(0.1) + ')':
    plt.ylim(0, 1.2e-3)
Fragile ylim rule due to exact label matching (Line 697).
This branch only matches Purse string strength (t=0.1) exactly, but call sites generate dynamic labels (including t=6.0), so the rule is silently skipped for other valid cases.
Suggested fix
- if y_axis_label == 'Purse string strength (t=' + str(0.1) + ')':
- plt.ylim(0, 1.2e-3)
+ if y_axis_name == 'Purse_string_strength' and y_axis_label.startswith('Purse string strength (t='):
+ # Keep this specific cap only if that's the intended behavior for all purse-string plots.
+ # If only some timepoints should be capped, key this off a parsed timepoint instead of label text.
+ plt.ylim(0, 1.2e-3)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pyVertexModel/util/utils.py` around lines 697 - 699, The ylim adjustment
is gated by an exact string match on y_axis_label which misses dynamic labels
like "Purse string strength (t=6.0)"; change the check in the utils plotting
code that currently references y_axis_label to detect the base metric instead
(e.g., check if "Purse string strength" is a substring or use a regex like
r"^Purse string strength\s*\(t=") and then call plt.ylim(0, 1.2e-3) when
matched; update the condition so it works for any t value/format (0.1, 6.0,
etc.) rather than only the exact 'Purse string strength (t=0.1)' string.
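The suggested prefix match can be sketched as follows (the helper name is hypothetical); it accepts any t value and number format:

```python
import re

_PURSE_STRING_RE = re.compile(r"^Purse string strength\s*\(t=")

def should_cap_purse_string_ylim(y_axis_label: str) -> bool:
    """Match the base metric regardless of the timepoint embedded in the label."""
    return bool(_PURSE_STRING_RE.match(y_axis_label))
```

The plotting code would then call `plt.ylim(0, 1.2e-3)` whenever this returns True, instead of comparing against one exact string.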
TEST_FIRE_CONFIGURATION.md (Outdated)
- Original FIRE paper: Bitzek et al., Phys. Rev. Lett. 97, 170201 (2006)
- Implementation: `src/pyVertexModel/algorithm/newtonRaphson.py`
- Documentation: `FIRE_ALGORITHM_GUIDE.md`
The implementation reference points to the wrong module.
FIRE in this PR is described as living under src/pyVertexModel/algorithm/integrators.py, so pointing readers to newtonRaphson.py will send debugging and tuning work to the wrong code path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@TEST_FIRE_CONFIGURATION.md` around lines 118 - 120, The documentation in
TEST_FIRE_CONFIGURATION.md incorrectly points the implementation reference to
src/pyVertexModel/algorithm/newtonRaphson.py; update that reference to
src/pyVertexModel/algorithm/integrators.py (the module that actually contains
the FIRE implementation) so readers are directed to the correct code path
(search for the line containing "Implementation:
`src/pyVertexModel/algorithm/newtonRaphson.py`" and replace it with the
integrators path).
- g_test, K_test, energies_test, _ = newtonRaphson.KgGlobal(vModel_test.geo_0, vModel_test.geo, vModel_test.geo,
-                                                           vModel_test.set)
+ g_test, K_test, energies_test, _ = newtonRaphson.KgGlobal(vModel_test.geo, vModel_test.set, vModel_test.geo)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
python - <<'PY'
import ast
from pathlib import Path
path = Path("Tests/test_vertexModel.py")
tree = ast.parse(path.read_text())
defined = set()
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for alias in node.names:
defined.add(alias.asname or alias.name.split(".")[0])
elif isinstance(node, ast.ImportFrom):
for alias in node.names:
defined.add(alias.asname or alias.name)
elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
defined.add(node.name)
loaded = {
node.id
for node in ast.walk(tree)
if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
}
print("newtonRaphson imported_or_defined:", "newtonRaphson" in defined)
print("newtonRaphson referenced:", "newtonRaphson" in loaded)
PY
Repository: Pablo1990/pyVertexModel
Length of output: 138
🏁 Script executed:
#!/bin/bash
# Check imports at the top of the test file
head -30 Tests/test_vertexModel.py
Repository: Pablo1990/pyVertexModel
Length of output: 1241
🏁 Script executed:
#!/bin/bash
# Look for KgGlobal in the integrators module
find . -name "integrators.py" -o -name "integrators" -type d | head -5
Repository: Pablo1990/pyVertexModel
Length of output: 111
🏁 Script executed:
#!/bin/bash
# Check the exact context around line 400
sed -n '395,405p' Tests/test_vertexModel.py
Repository: Pablo1990/pyVertexModel
Length of output: 479
🏁 Script executed:
#!/bin/bash
# Search for KgGlobal in the integrators module
rg "def KgGlobal|KgGlobal =" src/pyVertexModel/algorithm/integrators.py -A 5
Repository: Pablo1990/pyVertexModel
Length of output: 245
🏁 Script executed:
#!/bin/bash
# Check if KgGlobal is in the newton_raphson submodule
rg "def KgGlobal|KgGlobal =" src/pyVertexModel/algorithm/ -B 2 -A 3
Repository: Pablo1990/pyVertexModel
Length of output: 473
Use integrators.KgGlobal() instead of undefined newtonRaphson.KgGlobal().
Line 400 references newtonRaphson, which is not imported or defined in this file. The integrators module is imported at line 10 and contains KgGlobal.
Suggested fix
- g_test, K_test, energies_test, _ = newtonRaphson.KgGlobal(vModel_test.geo, vModel_test.set, vModel_test.geo)
+ g_test, K_test, energies_test, _ = integrators.KgGlobal(vModel_test.geo, vModel_test.set, vModel_test.geo)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| g_test, K_test, energies_test, _ = newtonRaphson.KgGlobal(vModel_test.geo, vModel_test.set, vModel_test.geo) | |
| g_test, K_test, energies_test, _ = integrators.KgGlobal(vModel_test.geo, vModel_test.set, vModel_test.geo) |
🧰 Tools
🪛 Ruff (0.15.6)
[warning] 400-400: Unpacked variable energies_test is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
[error] 400-400: Undefined name newtonRaphson
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Tests/test_vertexModel.py` at line 400, The test calls an undefined module
newtonRaphson.KgGlobal; replace that call with the correct integrators.KgGlobal
invocation so the test uses the imported module (change
newtonRaphson.KgGlobal(...) to integrators.KgGlobal(...)) while keeping the same
returned variables (g_test, K_test, energies_test, _), ensuring the test
imports/uses the existing integrators module and its KgGlobal function.
def test_weird_bug_should_not_happen(self):
    """
    Test for a weird bug that should not happen.
    Uses FIRE algorithm for stable, fast convergence.
    :return:
    """
    # Load data
    vModel_test = load_data('vertices_going_wild_1.pkl')

    # Run for 20 iterations. dt should not decrease to 1e-1
    vModel_test.set.tend = vModel_test.t + 20 * vModel_test.set.dt0

    # Update tolerance
    vModel_test.set.dt_tolerance = 0.25

    # Use FIRE algorithm for better stability and convergence
    vModel_test.set.integrator = 'fire'

    # Initialize FIRE parameters tuned for edge case geometries
    # More lenient parameters to handle near-zero power cases
    vModel_test.set.fire_dt_max = 10 * vModel_test.set.dt0
    vModel_test.set.fire_dt_min = 0.1 * vModel_test.set.dt0  # Higher minimum (was 0.02)
    vModel_test.set.fire_N_min = 3  # Accelerate sooner (was 5)
    vModel_test.set.fire_f_inc = 1.2  # Faster acceleration (was 1.1)
    vModel_test.set.fire_f_dec = 0.7  # Less aggressive decrease (was 0.5)
    vModel_test.set.fire_alpha_start = 0.15  # More damping initially (was 0.1)
    vModel_test.set.fire_f_alpha = 0.98  # Slower damping reduction (was 0.99)

    # Run the model
    vModel_test.iterate_over_time()

    # Check if it did not converge
    self.assertFalse(vModel_test.didNotConverge)
Assert the runaway-geometry regression directly.
These tests only check didNotConverge, but the bug report is about vertices going wild after remodelling. That can still regress while this flag stays false. Please also assert that the post-step geometry is valid, and if timestep collapse matters here, track the minimum dt / dt0 reached during the run instead of only the final boolean.
Suggested direction
# Run the model
vModel_test.iterate_over_time()
- # Check if it did not converge
- self.assertFalse(vModel_test.didNotConverge)
+ assert not vModel_test.didNotConverge
+ assert vModel_test.geo.geometry_is_correct()Also applies to: 521-539, 541-559
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Tests/test_vertexModel.py` around lines 487 - 519, Add explicit assertions
that the geometry did not "go wild" and that the timestep never collapsed: after
calling vModel_test.iterate_over_time(), assert that vertex positions are finite
and within expected bounds (use the model's vertex container, e.g.,
vModel_test.vertices or vModel_test.vertex_positions, and check for no NaN/Inf
and reasonable extents) and assert the minimum timestep ratio stayed above
threshold by computing min_dt_ratio = (vModel_test.min_dt_reached or
min(vModel_test.dt_history) if available) / vModel_test.set.dt0 and checking
min_dt_ratio > 0.1 (or another chosen threshold); keep the existing
self.assertFalse(vModel_test.didNotConverge) and add these two assertions in
test_weird_bug_should_not_happen (and mirror the same checks in the other
mentioned tests).
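The suggested post-run checks can be sketched as a helper; the attribute names it takes (`vertex_positions`, `dt_history`), the 1e3 extent bound, and the 0.1 ratio threshold are illustrative assumptions, not the model's actual API:

```python
import numpy as np

def assert_geometry_sane(vertex_positions, dt_history, dt0, min_dt_ratio=0.1):
    """Post-run regression checks: finite vertices, bounded extent, no dt collapse.
    The extent bound and ratio threshold are placeholder choices."""
    pts = np.asarray(vertex_positions, dtype=float)
    # No NaN/Inf anywhere in the vertex container
    assert np.all(np.isfinite(pts)), "geometry went wild: NaN/Inf vertex positions"
    # Overall extent stays within a plausible range
    extent = pts.max() - pts.min()
    assert extent < 1e3, f"geometry went wild: extent {extent:.3g}"
    # Timestep never collapsed relative to dt0
    assert min(dt_history) / dt0 > min_dt_ratio, "timestep collapsed"
```

In the test this would run right after `vModel_test.iterate_over_time()`, alongside the existing `self.assertFalse(vModel_test.didNotConverge)`.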
test_dir = abspath('Tests/Tests_data/%s' % file_name)
if file_name.endswith('.mat'):
    if exists(test_dir):
        mat_info = scipy.io.loadmat(test_dir)
    else:
        mat_info = scipy.io.loadmat('Tests_data/%s' % file_name)
load_data() no longer searches the repo’s canonical test-data folders.
The helper now only probes Tests/Tests_data and Tests_data. Fixtures that follow the repo convention under Tests/data or data will fail to load even when they exist.
Based on learnings: Load test data with load_data() helper function which expects .mat files in Tests/data/ or data/ directories
Also applies to: 38-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Tests/tests.py` around lines 14 - 19, The load_data() helper currently only
checks 'Tests/Tests_data' and 'Tests_data' for .mat files; update its lookup to
also probe the canonical repo test-data directories 'Tests/data' and 'data' (in
both places where the logic appears around the existing test_dir/file_name
checks), trying each path in order and falling back appropriately before calling
scipy.io.loadmat so fixtures stored in Tests/data or data are found; ensure the
same change is applied to the duplicate block referenced by the second
occurrence (the other mat_info load branch).
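One way to sketch the broadened lookup (the directory order, helper name, and the `_exists` injection hook are illustrative choices, not the helper's real signature):

```python
from os.path import abspath, exists

# Probe the canonical locations first, then the legacy ones
SEARCH_DIRS = ('Tests/data', 'data', 'Tests/Tests_data', 'Tests_data')

def resolve_test_file(file_name, search_dirs=SEARCH_DIRS, _exists=exists):
    """Return the first existing candidate path, else the last candidate
    so scipy.io.loadmat raises its usual file-not-found error."""
    candidates = [abspath('%s/%s' % (d, file_name)) for d in search_dirs]
    for path in candidates:
        if _exists(path):
            return path
    return candidates[-1]
```

Both `.mat` branches in `load_data()` could then share this one resolver instead of duplicating the fallback logic.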
@copilot Verify each finding against the current code and only fix it if needed.
@Pablo1990 I've opened a new pull request, #27, to work on those changes. Once the pull request is ready, I'll request review from you.
I have prepared some tests that should pass, but they are not passing: