- Add pyproject.toml with package metadata, dependencies, and CLI entry point
- Extract CLI logic into pageindex/cli.py with a proper main() function
- Simplify run_pageindex.py to delegate to pageindex.cli:main
- Add build artifacts to .gitignore

Closes VectifyAI#103
- Add --base-url CLI flag for custom API endpoints
- Thread base_url through all LLM call functions in page_index.py, page_index_md.py, and utils.py
- Add base_url to config.yaml defaults
- Fix tiktoken fallback to cl100k_base for non-OpenAI model names

Closes VectifyAI#115

Usage with Ollama:
pageindex --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1
Just flagging that PR #135 takes a very similar approach; both were independently developed and are functionally equivalent. Happy to close this in favour of #135 if that's easier to review, or to address any feedback here if you'd prefer this one.
Fixes #115
What this does
- Add `--base-url` CLI flag for custom OpenAI-compatible API endpoints
- Thread `base_url` through all LLM call functions in `page_index.py`, `page_index_md.py`, and `utils.py`
- Add `base_url: null` to `config.yaml` defaults (no behaviour change for existing users)
- Fix `tiktoken` to fall back to `cl100k_base` for non-OpenAI model names (previously crashed with `KeyError`)

Usage with Ollama

```shell
pageindex --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1
```
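As a rough sketch of the flag wiring (the parser structure and defaults below are illustrative, not pageindex's actual CLI code), the key point is that `base_url` defaults to `None`, so omitting the flag keeps the standard OpenAI endpoint:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative subset of the pageindex CLI arguments."""
    parser = argparse.ArgumentParser(prog="pageindex")
    parser.add_argument("--pdf_path", required=True)
    parser.add_argument("--model", default=None)
    # Defaulting to None means existing users see no behaviour change:
    # the OpenAI client falls back to its built-in endpoint.
    parser.add_argument("--base-url", dest="base_url", default=None)
    return parser
```

The parsed `base_url` value can then be passed straight through to each LLM call site.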
Works with any OpenAI-compatible provider
AWS Bedrock, Azure OpenAI, LM Studio, Mistral, etc. — anything that exposes an OpenAI-compatible endpoint.