# edge2torch
Build sparse PyTorch neural networks from edge lists of named nodes, with optional feature- and node-level attribution.
## Overview
edge2torch is an edge-list-to-PyTorch compiler for sparse neural network
architectures with named nodes.
Define a model architecture as an edge list, compile it into a minimally opinionated PyTorch model, train it with standard PyTorch tools, and optionally map model behavior back to the named nodes and features that defined the architecture.
The package is designed for users who want to build sparse or structured neural networks from a predefined graph rather than manually wiring PyTorch modules. It is domain-agnostic: any setting where a neural architecture can be represented as named edges can use the same graph-to-model abstraction.
Here, "graph" means the architecture specification, not necessarily a graph neural network. Feedforward models, recurrent models, and graph neural networks can all be represented by edge lists when their architecture is defined through directed connections between named nodes.
A major application area is knowledge-primed neural networks (KPNNs), where prior knowledge defines the model structure. In biology, for example, edge lists may connect genes, transcription factors, pathways, kinases, or other biological entities. The same approach can also apply in domains such as chemistry or other fields with graph-structured prior knowledge.
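The exact edge-list format is documented elsewhere, so the snippet below is only a schematic illustration of the idea: a directed edge list over named nodes, here with invented biology-flavored names (`TF_STAT3`, `gene_SOCS3`, etc. are hypothetical examples, not real package inputs).

```python
# Schematic edge list with named nodes (all node names are invented examples).
# Each edge is a directed (source, target) connection in the architecture.
edges = [
    ("TF_STAT3", "gene_SOCS3"),        # transcription factor -> target gene
    ("TF_NFKB1", "gene_SOCS3"),
    ("gene_SOCS3", "pathway_JAK_STAT"),
    ("pathway_JAK_STAT", "output_cell_state"),
]

sources = {s for s, _ in edges}
targets = {t for _, t in edges}

# Nodes that never appear as a target are natural input nodes;
# nodes that never appear as a source are natural output nodes.
input_nodes = sorted(sources - targets)
output_nodes = sorted(targets - sources)
print(input_nodes)   # ['TF_NFKB1', 'TF_STAT3']
print(output_nodes)  # ['output_cell_state']
```

The same structure works for any domain: only the node names change, not the graph-to-model abstraction.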
edge2torch deliberately leaves training loops, losses, optimizers,
task-specific heads, and advanced customization to standard PyTorch.
## Core workflow
The package is built around four main steps:
- Define a model architecture as an edge list with named `source` and `target` nodes.
- Compile the edge list into a backend-specific PyTorch model with `compile_graph()`.
- Align named input data features to the compiled model's input nodes with `align_features_to_input_nodes()`.
- Customize, train, and interpret the model with ordinary PyTorch, `customize_model()`, and `interpret_model()`.
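How `compile_graph()` turns an edge list into a sparse model is backend-specific and not spelled out on this page. As a mental model, one common way to realize sparsity from an edge list is a connectivity mask over a dense weight matrix, so only listed edges carry trainable weights. The hand-rolled sketch below illustrates that idea in plain Python; it is not edge2torch code.

```python
# Illustrative only: derive a 0/1 connectivity mask from a named edge list.
# edge2torch's actual compilation may differ from this sketch.
edges = [
    ("in_a", "hidden_x"),
    ("in_b", "hidden_x"),
    ("in_b", "hidden_y"),
]

src_names = sorted({s for s, _ in edges})   # ['in_a', 'in_b']
dst_names = sorted({t for _, t in edges})   # ['hidden_x', 'hidden_y']
src_index = {n: i for i, n in enumerate(src_names)}
dst_index = {n: i for i, n in enumerate(dst_names)}

# mask[i][j] == 1 iff an edge connects source j to target i
# (rows = target nodes, matching the usual weight-matrix layout).
mask = [[0] * len(src_names) for _ in dst_names]
for s, t in edges:
    mask[dst_index[t]][src_index[s]] = 1

print(mask)  # [[1, 1], [0, 1]]
```

Because every row and column of the mask corresponds to a named node, trained weights can later be read back in terms of those names — the basis for the interpretation step.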
## Main public API
The current public API is centered on:
- `compile_graph()`
- `align_features_to_input_nodes()`
- `customize_model()`
- `interpret_model()`
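`align_features_to_input_nodes()` is described as mapping named data features onto the compiled model's input nodes; its signature is not shown on this page. The snippet below sketches the underlying alignment idea with plain Python column reordering (hypothetical data and names, not the package's implementation).

```python
# Illustrative sketch of feature-to-input-node alignment.
# Column order in the data rarely matches the compiled model's input order.
feature_names = ["gene_b", "gene_c", "gene_a"]   # order in the data
data = [[0.2, 0.9, 0.5],                         # one row per sample
        [0.1, 0.4, 0.8]]

input_nodes = ["gene_a", "gene_b"]   # order the compiled model expects;
                                     # 'gene_c' has no input node -> dropped

col = {name: i for i, name in enumerate(feature_names)}
aligned = [[row[col[n]] for n in input_nodes] for row in data]

print(aligned)  # [[0.5, 0.2], [0.8, 0.1]]
```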
## Package philosophy
edge2torch is intentionally minimally opinionated.
It defines the structural semantics required to compile a graph into a neural network backend, but it does not impose broader modeling choices such as:
- activation functions
- output heads
- dropout
- loss functions
- optimizers
- training loops
These remain part of the normal PyTorch workflow.
This keeps the package small in scope:
- `edge2torch` handles graph compilation
- PyTorch handles model training
- `edge2torch` maps trained models back to interpretable named entities
## Supported backends
compile_graph() currently supports:
- `feedforward`
- `recurrent`
- `graphnn`
These backends share the same edge-list input format but differ in how the graph structure is translated into neural-network computation.
Feature attribution is available through Captum-based methods. Feedforward models also support broad node-level attribution. Recurrent and graph neural network backends can be compiled and trained, while node-level interpretation for these backends is planned for a future release.
See the Backends page for details.
## Start here
If you are new to the package, start with:
- Installation for package setup and optional extras
- Getting started for a full end-to-end example
- Feedforward skip edges for how non-adjacent feedforward edges are handled
- Backends for backend semantics and current support
- API reference for function-level documentation
## License
This project is licensed under the MIT License. See the LICENSE file on GitHub for details.