
fix: resolve scalar parameter types from graph context during inference#189

Open
siddiquifaras wants to merge 4 commits into neuromorphs:main from siddiquifaras:fix/scalar-type-inference

Conversation

@siddiquifaras (Collaborator)

Scalar neuron parameters (0-d arrays) produce empty type annotations that fail type checking. When infer_types() detects a type mismatch where the post node has scalar-derived types (size == 0), it now adopts the predecessor's output type instead of raising an error. Parameters remain untouched -- only type annotations are resolved.
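A minimal sketch of why the scalar case trips type checking: a 0-d array has shape `()`, so a shape-derived type annotation becomes an empty array with `size == 0`, which is the condition the fix keys on (illustrative only, not the actual NIR code).

```python
import numpy as np

# A scalar (0-d) neuron parameter has shape (), so the shape-derived
# type annotation is an empty array with size == 0 -- the condition
# under which infer_types() now adopts the predecessor's output type.
scalar_param = np.array(0.5)                 # 0-d parameter
derived_type = np.array(scalar_param.shape)  # shape () -> empty array

print(derived_type.size)  # 0
```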

Also guards serialization against h5py's inability to gzip-compress scalar (0-d) datasets.

Closes #176
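The serialization guard can be sketched as follows. The helper name `compression_kwargs` is hypothetical; the underlying fact is that h5py cannot apply gzip (a chunked filter) to scalar (0-d) datasets, so compression should only be requested for non-scalar data.

```python
import numpy as np

def compression_kwargs(value: np.ndarray) -> dict:
    # Hypothetical helper: h5py rejects gzip compression (and other
    # chunked filters) on scalar (0-d) datasets, so only request
    # compression when the array has at least one dimension.
    return {"compression": "gzip"} if value.ndim > 0 else {}

print(compression_kwargs(np.array(0.5)))     # {}
print(compression_kwargs(np.array([1, 2])))  # {'compression': 'gzip'}
```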


@Jegp Jegp left a comment


Excellent addition, thanks.

I'd like to hear your input on the issue with copying over the output dictionary for input ports. It would also be helpful to have a test where we have nodes with multiple outputs/inputs, to make sure that part is well behaved.

Review comment on the diff:

    )
    if is_scalar_type:
        post_node.input_type = {
            k.replace("output", "input"): v.copy()


Let me make sure I'm understanding this: you're taking the dict pre_node.output.items(), replacing "output" with "input" in each key, and copying v as the value. Is this a good strategy? I get the idea that we want to copy the output parameter, but, in principle, one node can have multiple outputs.

I'm not sure I have a great solution for this, but it's worth thinking a bit about. Would it be an option to build a new dictionary that only aligns the input? It's a bit of a headache because the ports are slightly underspecified (i.e. people can (ab)use them in whichever way), but I'd like to avoid constraining that too much if it's not necessary.

@siddiquifaras (Collaborator, Author)

Thanks for the feedback. The k.replace("output", "input") pattern is consistent with the existing implementation in infer_types() (lines 316–318), which uses the same approach for undefined input types.

1. Current NIR implementation

All nodes use single-port naming ({"input": ...}, {"output": ...}), and multi-port nodes currently raise NotImplementedError in check_types() (lines 216–218). In this context, the string replacement works reliably.
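A minimal sketch of the rename-and-copy pattern in the single-port case (the dict contents are illustrative; only the key renaming and value copying mirror the implementation):

```python
import numpy as np

# Hypothetical single-port type dict: the key is renamed from "output"
# to "input" and the shape array is copied, so the predecessor's type
# annotation is adopted without touching any parameters.
pre_output_type = {"output": np.array([10])}

post_input_type = {
    k.replace("output", "input"): v.copy() for k, v in pre_output_type.items()
}

print(post_input_type)  # {'input': array([10])}
```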

2. Future multi-port support

When multi-port nodes are introduced, NIR edges connect entire nodes (all ports to all ports), not individual ports. The type check at line 307 enforces port count consistency:

len(post_node.input_type) != len(pre_node.output_type)

Copying the full output_type dict to input_type therefore remains correct.

3. Port naming convention

The approach also extends to indexed ports (e.g., output_0 → input_0, output_1 → input_1) as long as the naming convention retains the "output" / "input" substrings.
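The indexed-port case can be sketched the same way (the port names and shapes here are hypothetical, chosen only to show that the substring replacement preserves the indices):

```python
import numpy as np

# Hypothetical indexed multi-port type dict. The same rename-and-copy
# pattern maps output_0/output_1 to input_0/input_1, because only the
# "output" substring is replaced and the index suffix is preserved.
pre_output_type = {"output_0": np.array([4]), "output_1": np.array([8])}

post_input_type = {
    k.replace("output", "input"): v.copy() for k, v in pre_output_type.items()
}

print(sorted(post_input_type))  # ['input_0', 'input_1']
```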

I've added a test (test_scalar_lif_port_naming) to confirm that port naming is preserved and to document this behavior for future multi-port support.



Development

Successfully merging this pull request may close these issues.

Fix: Scalar Parameter Broadcasting Forces Inefficient Weight Duplication
