Refactor: Generalize LLM Provider Architecture and Introduce Abstract Base Interface #136
vinayakkushwah01 wants to merge 1 commit into VectifyAI:main
Overview
This PR removes the direct dependency on OpenAI and generalizes the LLM integration layer to support multiple providers through a unified architecture.
The refactor introduces a provider factory pattern, a shared abstract base class, standardized method naming, a normalized response schema, and environment-based configuration.
The goal is to make the system provider-agnostic, easier to extend, and more maintainable.
Key Changes
1. Generalized Provider Support
Implemented a factory-based approach to dynamically instantiate supported LLM providers based on configuration.
Supported providers:
- openai
- gemini
- anthropic
- groq
- aws-bedrock
- open-router

The `PROVIDER` environment variable must be set to one of the above values; any other value raises an unsupported-provider error.

This removes hardcoded OpenAI logic and centralizes provider selection into a single abstraction layer.
2. Introduced Base Abstract LLM Interface
Added a `BaseLLM` abstract class using Python's `abc` module. All providers must now implement:

- `generate(...)`
- `agenerate(...)`

This ensures every provider exposes the same synchronous and asynchronous generation contract.
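A minimal sketch of such an interface, assuming the two method names listed above (the real `BaseLLM` in the PR may carry different signatures):

```python
from abc import ABC, abstractmethod

class BaseLLM(ABC):
    """Abstract contract every provider implementation must satisfy."""

    @abstractmethod
    def generate(self, prompt: str, **kwargs) -> str:
        """Synchronous text generation."""

    @abstractmethod
    async def agenerate(self, prompt: str, **kwargs) -> str:
        """Asynchronous text generation."""

class EchoLLM(BaseLLM):
    """Toy concrete provider used only to illustrate the contract."""

    def generate(self, prompt: str, **kwargs) -> str:
        return prompt

    async def agenerate(self, prompt: str, **kwargs) -> str:
        return self.generate(prompt, **kwargs)
```

Because `BaseLLM` subclasses `ABC`, instantiating it directly or subclassing it without both methods raises a `TypeError`, which catches incomplete provider implementations at construction time rather than at call time.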
3. Standardized Method Naming
Renamed provider-specific method names to standardized LLM-neutral method names.
All generation logic now routes through:

- `generate`
- `agenerate`

This removes provider assumptions from higher-level modules and ensures consistent invocation patterns.
4. Defined Unified Response Schema
Introduced a normalized response schema to standardize outputs across all providers.
Benefits: downstream modules consume a single response shape regardless of provider, and providers can be swapped without changes to parsing logic.
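One plausible shape for such a normalized schema, with an adapter from an OpenAI-style payload. The field names and the `normalize_openai` helper are hypothetical; the PR defines its own schema:

```python
from dataclasses import dataclass, field

@dataclass
class LLMResponse:
    text: str                                   # generated content
    model: str                                  # model that produced it
    provider: str                               # backend that served the request
    usage: dict = field(default_factory=dict)   # token counts, if reported

def normalize_openai(raw: dict) -> LLMResponse:
    """Adapt an OpenAI-style chat-completion payload to the unified schema."""
    return LLMResponse(
        text=raw["choices"][0]["message"]["content"],
        model=raw.get("model", ""),
        provider="openai",
        usage=raw.get("usage", {}),
    )
```

Each provider ships one small adapter like this, and everything above the abstraction layer only ever sees `LLMResponse`.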
5. Environment-Based Configuration
Provider selection and model configuration are now driven entirely through environment variables.
Required variables:

- `API_KEY`
- `PROVIDER` (allowed values listed above)
- `MODEL`
- `EMBEDDING_MODEL`

Recommended embedding configuration:

`EMBEDDING_MODEL = "cl100k_base"`

This embedding tokenizer provides reliable compatibility for most general-purpose LLM workflows.
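Putting the variables together, a `.env` file for this scheme might look like the following; the key and model values are placeholders, not ones prescribed by the PR:

```shell
# Example .env (placeholder values)
API_KEY=sk-...
PROVIDER=openai
MODEL=gpt-4o
EMBEDDING_MODEL=cl100k_base
```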
Breaking Changes
- Provider integrations must implement the `BaseLLM` interface.
- `PROVIDER` must match one of the explicitly allowed values.

Benefits

The system is now provider-agnostic, easier to extend with new providers, and more maintainable.
Testing
Validated across all supported providers: openai, gemini, anthropic, groq, aws-bedrock, and open-router.
Summary
This refactor transitions the system from a single-provider implementation to a scalable multi-provider architecture. It enforces a clear abstraction layer, standardizes interfaces, and prepares the system for long-term extensibility.