Quick Start Guide
Get up and running with GGUF Loader 2.0.0 in just a few minutes! New to GGUF Loader? Check out the homepage to see what makes it special.
🚀 Installation
pip install ggufloader
Need detailed installation instructions? Check out our Installation Guide for platform-specific instructions and troubleshooting.
🎯 Launch GGUF Loader
ggufloader
This opens GGUF Loader with the Smart Floating Assistant addon already loaded.
📥 Load Your First Model
Step 1: Get a GGUF Model
Download a GGUF model from one of these sources:
Popular Model Sources:
- Hugging Face: https://huggingface.co/models?library=gguf
- TheBloke’s Models: Search for “TheBloke” on Hugging Face
- Local Models: Convert your own models to GGUF format
Recommended Starter Models:
- Small (2-4 GB): llama-2-7b-chat.Q4_K_M.gguf
- Medium (4-8 GB): mistral-7b-instruct-v0.1.Q5_K_M.gguf
- Large (8 GB+): llama-2-13b-chat.Q4_K_M.gguf
💡 See Model Comparisons: Visit our homepage model section to see detailed comparisons and performance benchmarks for different models.
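If you prefer to script the download, the huggingface_hub package (installed separately) can fetch a GGUF file directly. This is a minimal sketch, not part of GGUF Loader itself; the repository and file names are examples only, so copy the exact names from the model card you choose:
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",   # example repository
    filename="llama-2-7b-chat.Q4_K_M.gguf",    # example quantization
    local_dir="models",                        # download into ./models
)
print("Model saved to:", model_path)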
Step 2: Load the Model
- Click “Select GGUF Model” in the main window
- Browse and select your downloaded .gguf file
- Wait for loading (this may take 1-5 minutes depending on model size)
- Look for the “Model ready!” message
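GGUF Loader runs models through llama-cpp-python (the same package the Installation Guide's GPU-acceleration section swaps out). If a model refuses to load in the GUI, a quick sanity check from Python can help narrow things down; this is a minimal sketch and the model path is a placeholder:
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window in tokens
    n_threads=4,      # CPU threads; tune for your machine
    verbose=False,
)
out = llm("Q: What is the GGUF format?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())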
💬 Test Basic Chat
Try Your First Chat
- Type a message in the chat input box
- Press Enter or click “Send”
- Watch the AI respond in real-time
- Continue the conversation!
Example Conversations:
You: Hello! How are you today?
AI: Hello! I'm doing well, thank you for asking. I'm here and ready to help you with any questions or tasks you might have. How can I assist you today?
You: Can you explain what GGUF models are?
AI: GGUF (GPT-Generated Unified Format) is a file format designed for storing large language models...
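If you ever want to reproduce this kind of conversation outside the GUI, llama-cpp-python exposes an OpenAI-style chat API. A minimal sketch, with a placeholder model path:
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048, verbose=False)
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user = input("You: ")
    if user.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
    text = reply["choices"][0]["message"]["content"].strip()
    messages.append({"role": "assistant", "content": text})
    print("AI:", text)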
✨ Use the Smart Floating Assistant
The Smart Floating Assistant is GGUF Loader’s killer feature - it works across ALL applications! See it in action on our homepage to understand its full potential.
Step 1: Select Text Anywhere
- Open any application (browser, Word, Notepad, etc.)
- Select some text by highlighting it with your mouse
- Look for the ✨ floating button that appears near your cursor
Step 2: Process the Text
- Click the ✨ button
- Choose an action:
- Summarize: Get a concise summary
- Comment: Generate an insightful comment
- Wait for AI processing
- Copy the result and use it anywhere!
Example Workflow:
1. Reading a long article in your browser
2. Select a complex paragraph
3. Click ✨ → "Summarize"
4. Get: "This paragraph explains that..."
5. Copy and paste the summary into your notes
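Under the hood, the Summarize action follows a simple pattern: grab the selected text via the clipboard, send it to the loaded model with a summarization prompt, and hand back the result (the Addon Development Guide later in this document walks through the real implementation). A rough sketch of that pattern, with illustrative prompt wording:
import time
import pyautogui
import pyperclip

def grab_selection() -> str:
    original = pyperclip.paste()      # remember the user's clipboard
    pyautogui.hotkey("ctrl", "c")     # copy the current selection
    time.sleep(0.1)                   # give the clipboard a moment to update
    selected = pyperclip.paste()
    pyperclip.copy(original)          # restore the clipboard
    return selected.strip()

def summarize(model, text: str) -> str:
    prompt = f"Summarize the following text in two or three sentences:\n\n{text}\n\nSummary:"
    response = model(prompt, max_tokens=150, temperature=0.3)
    return response["choices"][0]["text"].strip()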
🎛️ Basic Settings
Model Settings
- Temperature: Controls creativity (0.1 = focused, 1.0 = creative)
- Max Tokens: Maximum response length
- Context Size: How much conversation history to remember
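These settings correspond roughly to the generation parameters of the llama-cpp-python backend. A minimal sketch of the mapping (the values are just examples and the model path is a placeholder):
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct-v0.1.Q5_K_M.gguf",  # placeholder
    n_ctx=4096,        # "Context Size": how much history the model can see
)
response = llm(
    "Explain quantization in one paragraph.",
    temperature=0.1,   # "Temperature": 0.1 = focused, 1.0 = creative
    max_tokens=256,    # "Max Tokens": cap on response length
)
print(response["choices"][0]["text"])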
Smart Assistant Settings
- Enable/Disable: Toggle the floating assistant
- Response Speed: Adjust processing speed vs quality
- Auto-copy: Automatically copy results to clipboard
🔧 Common Tasks
Task 1: Summarize Articles
- Open an article in your browser
- Select the main content
- Click ✨ → “Summarize”
- Get a concise summary
Task 2: Generate Comments
- Select a social media post or forum discussion
- Click ✨ → “Comment”
- Get a thoughtful response
- Edit and post your AI-assisted comment
Task 3: Process Code
- Select code in your IDE or GitHub
- Click ✨ → “Comment”
- Get code explanation or suggestions
Task 4: Email Assistance
- Select email content
- Click ✨ → “Summarize” for long emails
- Click ✨ → “Comment” to draft responses
🎨 Customization
Addon Management
- Click the addon sidebar (left panel)
- View loaded addons
- Click addon names to open their interfaces
- Manage addon settings
Themes and UI
- Dark/Light Mode: Available in settings
- Font Size: Adjustable for better readability
- Window Layout: Resizable panels
🐛 Quick Troubleshooting
Model Won’t Load
- Check file format: Must be a .gguf file
- Check file size: The model has to fit in your available RAM
- Try smaller model: Start with 4 GB or less
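If you are unsure whether a file is a valid GGUF model or whether it will fit in memory, a quick check like the sketch below can help. It assumes the optional psutil package for reading available RAM and only inspects the file header (GGUF files start with the ASCII magic "GGUF"); the path is a placeholder:
from pathlib import Path
import psutil  # pip install psutil

def check_gguf(path: str) -> None:
    p = Path(path)
    with p.open("rb") as f:
        magic = f.read(4)
    size_gb = p.stat().st_size / 1e9
    free_gb = psutil.virtual_memory().available / 1e9
    print("Valid GGUF header:", magic == b"GGUF")
    print(f"Model size: {size_gb:.1f} GB, available RAM: {free_gb:.1f} GB")

check_gguf("models/llama-2-7b-chat.Q4_K_M.gguf")  # placeholder path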
Floating Assistant Not Working
- Check model: Must have a model loaded first
- Try different text: Select at least five characters
- Restart application: Sometimes helps with initialization
Performance Issues
- Close other apps: Free up RAM
- Use smaller model: Try Q4 quantization
- Enable GPU: If you have compatible hardware
📚 Next Steps
Learn More
- Addon Development: Create custom addons
- API Reference: Technical documentation
- Package Structure: Technical architecture
Get Involved
- GitHub: Source code and issues
- Discussions: Community support
- Smart Floater Example: Study the built-in addon
🏠 Explore Homepage Features
- Interactive Demos - Experience features before diving deeper
- Community Showcase - Get inspired by community projects
- Download Options - Find the right version for your system
💡 Pro Tips
Efficiency Tips
- Use keyboard shortcuts: Learn the hotkeys for faster workflow
- Bookmark good models: Keep a list of your favorite GGUF models
- Organize your workflow: Use the floating assistant for specific tasks
- Experiment with prompts: Different phrasings get different results
Model Selection Tips
- Start small: Begin with 7B parameter models
- Match your hardware: Don’t exceed your RAM capacity (see Installation Guide)
- Try different quantizations: Q4_K_M is a good balance
- Read model cards: Check Hugging Face for model details
For more technical details about how models are loaded and managed, see the Package Structure documentation.
Smart Assistant Tips
- Select quality text: Better input = better output
- Use specific actions: “Summarize” vs “Comment” give different results
- Edit the results: AI output is a starting point, not final copy
- Practice regularly: The more you use it, the more useful it becomes
🎊 You’re Now a GGUF Loader User!
Congratulations on completing the Quick Start guide! You now have the foundation to use GGUF Loader effectively.
What You’ve Accomplished
- ✅ Installed and launched GGUF Loader
- ✅ Loaded your first GGUF model
- ✅ Used the Smart Floating Assistant
- ✅ Learned basic troubleshooting
Ready for the Next Level?
Now that you’re comfortable with the basics, here are some exciting paths to explore:
🛠️ Become a Power User
- Create Custom Addons - Extend GGUF Loader with your own features
- Study the Smart Floater Code - Understand how the built-in addon works
- Explore the API - Dive into the technical details
🌟 Join the Community
- Share Your Experience - Tell others about your GGUF Loader journey
- Browse Community Projects - Get inspired by what others have created
- Contribute Ideas - Help shape the future of GGUF Loader
🚀 Explore Advanced Features
- Try Different Models - Experiment with various models for different tasks
- Watch Feature Demos - See advanced features in action
- Read Advanced Tutorials - Master complex workflows
Happy AI-powered text processing! 🚀
Need help? Contact us at support@ggufloader.com or visit our support page.
Related Documentation
Installation Guide
Complete guide to installing GGUF Loader on Windows, macOS, and Linux systems. Beginner · 5 minutes.
This guide will help you install GGUF Loader 2.0.0 on your system. Want to see what GGUF Loader can do first? Explore the features on our homepage.
🚀 Quick Installation
Using pip (Recommended)
pip install ggufloader
That's it! You can now run GGUF Loader with:
ggufloader
📋 System Requirements
Minimum Requirements
- Python: 3.8 or higher
- RAM: 4 GB (8 GB+ recommended for larger models)
- Storage: 2 GB free space for models
- OS: Windows 10/11, macOS 10.14+, or Linux
Recommended Requirements
- Python: 3.10 or higher
- RAM: 16 GB or more
- GPU: NVIDIA GPU with CUDA support (optional but recommended)
- Storage: 10 GB+ free space for multiple models
🔧 Detailed Installation
Step 1: Install Python
If you don't have Python installed:
Windows
- Download Python from python.org
- Run the installer and check "Add Python to PATH"
- Verify installation:
python --version
macOS
# Using Homebrew (recommended)
brew install python
# Or download from python.org
Linux (Ubuntu/Debian)
sudo apt update
sudo apt install python3 python3-pip
Step 2: Create a Virtual Environment (Recommended)
# Create the virtual environment
python -m venv ggufloader-env
# Activate it
# Windows:
ggufloader-env\Scripts\activate
# macOS/Linux:
source ggufloader-env/bin/activate
Step 3: Install GGUF Loader
pip install ggufloader
Step 4: Verify Installation
ggufloader --version
🎮 First Run
Launch GGUF Loader
ggufloader
This will open the GGUF Loader application with the Smart Floating Assistant addon already loaded. You can see this feature demonstrated in the homepage features section.
Load Your First Model
- Download a GGUF model (e.g., from Hugging Face), or browse recommended models on our homepage
- Click "Select GGUF Model" in the application
- Choose your model file
- Wait for loading (may take a few minutes)
- Start chatting!
💡 See it in action: Check out the model loading demonstration on our homepage.
🔧 GPU Acceleration (Optional)
For better performance with larger models, you can enable GPU acceleration:
NVIDIA GPU (CUDA)
# Uninstall the CPU version
pip uninstall llama-cpp-python
# Install the GPU version
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
Apple Silicon (Metal)
# Uninstall the CPU version
pip uninstall llama-cpp-python
# Install the Metal version
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
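After reinstalling, you can confirm that layers are actually offloaded to the GPU. A minimal sketch: load any model with n_gpu_layers set and watch llama-cpp-python's startup log for GPU offload messages (the model path is a placeholder):
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload all layers to the GPU if possible
    verbose=True,      # prints backend and offload details while loading
)
print(llm("Hello!", max_tokens=16)["choices"][0]["text"])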
🛠️ Advanced Installation
Development Installation
If you want to contribute to or modify GGUF Loader, or develop custom addons:
# Clone the repository
git clone https://github.com/gguf-loader/gguf-loader.git
cd gguf-loader
# Install in development mode
pip install -e .
# Install development dependencies
pip install -e .[dev]
For addon development, see our Addon Development Guide and API Reference.
Custom Installation Location
# Install to a specific directory
pip install --target /path/to/directory ggufloader
# Add it to the Python path
export PYTHONPATH="/path/to/directory:$PYTHONPATH"
🐛 Troubleshooting Installation
Common Issues
Issue: "pip not found"
# Windows
python -m pip install ggufloader
# macOS/Linux
python3 -m pip install ggufloader
Issue: "Permission denied"
# Use the --user flag
pip install --user ggufloader
# Or use a virtual environment (recommended)
python -m venv venv
source venv/bin/activate   # Linux/macOS
# or: venv\Scripts\activate   # Windows
pip install ggufloader
Issue: "Package not found"
# Update pip first
pip install --upgrade pip
# Then install
pip install ggufloader
Issue: "SSL certificate error"
# Use trusted hosts
pip install --trusted-host pypi.org --trusted-host pypi.python.org ggufloader
Platform-Specific Issues
Windows
- Issue: "Microsoft Visual C++ 14.0 is required"
- Solution: Install the Microsoft C++ Build Tools
macOS
- Issue: "Command line tools not found"
- Solution:
xcode-select --install
Linux
- Issue: Missing system dependencies
# Ubuntu/Debian
sudo apt install build-essential python3-dev
# CentOS/RHEL
sudo yum groupinstall "Development Tools"
sudo yum install python3-devel
🔄 Updating GGUF Loader
Update to the Latest Version
pip install --upgrade ggufloader
Check the Current Version
ggufloader --version
Downgrade If Needed
pip install ggufloader==1.0.0   # replace with the desired version
🗑️ Uninstallation
Remove GGUF Loader
pip uninstall ggufloader
Clean Up Dependencies (Optional)
# List installed packages
pip list
# Remove specific dependencies if they are not needed elsewhere
pip uninstall llama-cpp-python PySide6 pyautogui pyperclip
Remove Configuration Files
# Windows
rmdir /s "%APPDATA%\ggufloader"
# macOS/Linux
rm -rf ~/.config/ggufloader
rm -rf ~/.local/share/ggufloader
📦 Alternative Installation Methods
Using conda
# Create a conda environment
conda create -n ggufloader python=3.10
conda activate ggufloader
# Install via pip (no conda package yet)
pip install ggufloader
Using pipx (Isolated Installation)
# Install pipx if it is not already installed
pip install pipx
# Install GGUF Loader in an isolated environment
pipx install ggufloader
# Run GGUF Loader
ggufloader
From Source
# Download the source
wget https://github.com/gguf-loader/gguf-loader/archive/v2.0.0.tar.gz
tar -xzf v2.0.0.tar.gz
cd gguf-loader-2.0.0
# Install
pip install .
🎯 Next Steps
After installation:
- Read the Quick Start Guide to get up and running
- Explore Addon Development to create custom addons
- Learn about the Smart Floating Assistant built-in addon
- Join our community for support and discussions
🏠 Explore More on the Homepage
- See GGUF Loader in Action - watch live demos of key features
- Download for Your Platform - get the right version for your system
- Browse the Model Collection - find the perfect model for your needs
- Read User Stories - see how others use GGUF Loader
💡 Need Help?
- 📖 Check the Troubleshooting Guide
- 🐛 Report installation issues
- 💬 Ask for help in Discussions
- 📧 Contact us: support@ggufloader.com
Related Topics
Before proceeding, you might want to understand:
- GGUF Loader Package Structure - how GGUF Loader is organized
- System Requirements and Compatibility - ensure your system is compatible
🎉 Congratulations!
You've successfully installed GGUF Loader! Here's what you can do next:
Immediate Next Steps
- Start the Quick Start Guide - load your first model and start chatting
- Browse the Model Library - discover curated models for different use cases
- Explore Features - learn about the Smart Floating Assistant and other powerful features
Ready to Go Deeper?
- Try the Interactive Demos - experience GGUF Loader's features live on our homepage
- Join the Community - share your experience and get help from other users
- Read Success Stories - learn how others are using GGUF Loader
Welcome to GGUF Loader! 🎉
Addon Development Guide
Learn to create custom addons for GGUF Loader, with examples and best practices. Advanced · 15 minutes.
This guide will teach you how to create custom addons for GGUF Loader 2.0.0. Addons extend the functionality of GGUF Loader and can provide new features, UI components, and integrations. Want to see existing addons in action? Check out the addon showcase on our homepage.
🏗️ Addon Architecture
What Is an Addon?
An addon is a Python package that extends GGUF Loader's functionality. Addons can:
- Add new UI components and windows
- Process text and interact with AI models
- Integrate with external services
- Provide new workflows and automation
- Extend the main application's capabilities
🎯 See real examples: Visit the homepage community section to see what others have built and get inspiration for your own addons.
Addon Structure
Every addon must follow this basic structure:
addons/
└── your_addon_name/
    ├── __init__.py   # Addon entry point
    ├── main.py       # Main addon logic
    ├── ui.py         # UI components (optional)
    ├── config.py     # Configuration (optional)
    └── README.md     # Addon documentation
🚀 Creating Your First Addon
Step 1: Create the Addon Directory
mkdir -p addons/my_awesome_addon
cd addons/my_awesome_addon
Step 2: Create the Entry Point (__init__.py)
"""
My Awesome Addon - a sample addon for GGUF Loader

This addon demonstrates the basic structure and capabilities
of the GGUF Loader addon system.
"""

__version__ = "1.0.0"
__author__ = "Your Name"
__description__ = "A sample addon that demonstrates basic functionality"

# Import the register function
from .main import register

# Export the register function
__all__ = ["register"]
Step 3: Create the Main Logic (main.py)
)""" main logic for my awesome addon """ import logging from pyside6.qtwidgets import qwidget, qvboxlayout, qlabel, qpushbutton, qtextedit from pyside6.qtcore import qtimer class myawesomeaddon: """main addon class that handles the addon functionality.""" def __init__(self, gguf_app): """initialize the addon with reference to the main gguf app.""" self.gguf_app = gguf_app self.logger = logging.getlogger(__name__) self.is_running = false # initialize your addon components here self.setup_addon() def setup_addon(self): """setup the addon components.""" self.logger.info("setting up my awesome addon") # add your initialization logic here def get_model(self): """get the currently loaded gguf model.""" try: if hasattr(self.gguf_app, 'model') and self.gguf_app.model: return self.gguf_app.model elif hasattr(self.gguf_app, 'ai_chat') and hasattr(self.gguf_app.ai_chat, 'model'): return self.gguf_app.ai_chat.model return none except exception as e: self.logger.error(f"error getting model: {e}") return none def process_text_with_ai(self, text, prompt_template="process this text: {text}"): """process text using the loaded ai model.""" model = self.get_model() if not model: return "error: no ai model loaded" try: prompt = prompt_template.format(text=text) response = model( prompt, max_tokens=200, temperature=0.7, stop=["</s>", "\n\n"] ) # extract text from response if isinstance(response, dict) and 'choices' in response: return response['choices'][0].get('text', '').strip() elif isinstance(response, str): return response.strip() else: return str(response).strip() except exception as e: self.logger.error(f"error processing text: {e}") return f"error: {str(e)}" def start(self): """start the addon.""" self.is_running = true self.logger.info("my awesome addon started") def stop(self): """stop the addon.""" self.is_running = false self.logger.info("my awesome addon stopped") class myawesomeaddonwidget(qwidget): """ui widget for the addon.""" def __init__(self, addon_instance): super().__init__() self.addon = addon_instance self.setup_ui() def setup_ui(self): """setup the addon ui.""" self.setwindowtitle("my awesome addon") self.setminimumsize(400, 300) layout = qvboxlayout(self) # title title = qlabel("🚀 my awesome addon") title.setstylesheet("font-size: 18px; font-weight: bold; margin: 10px;") layout.addwidget(title) # description description = qlabel("this is a sample addon that demonstrates basic functionality.") description.setwordwrap(true) layout.addwidget(description) # input area layout.addwidget(qlabel("enter text to process:")) self.input_text = qtextedit() self.input_text.setmaximumheight(100) self.input_text.setplaceholdertext("type some text here...") layout.addwidget(self.input_text) # process button self.process_btn = qpushbutton("🤖 process with ai") self.process_btn.clicked.connect(self.process_text) layout.addwidget(self.process_btn) # output area layout.addwidget(qlabel("ai response:")) self.output_text = qtextedit() self.output_text.setreadonly(true) layout.addwidget(self.output_text) # status self.status_label = qlabel("ready") self.status_label.setstylesheet("color: green;") layout.addwidget(self.status_label) def process_text(self): """process the input text with ai.""" input_text = self.input_text.toplaintext().strip() if not input_text: self.output_text.settext("please enter some text to process.") return self.status_label.settext("processing...") self.status_label.setstylesheet("color: orange;") self.process_btn.setenabled(false) # process with ai (using qtimer to avoid blocking ui) 
qtimer.singleshot(100, lambda: self._do_processing(input_text)) def _do_processing(self, text): """actually process the text.""" try: result = self.addon.process_text_with_ai( text, "please provide a helpful and insightful response to: {text}" ) self.output_text.settext(result) self.status_label.settext("complete!") self.status_label.setstylesheet("color: green;") except exception as e: self.output_text.settext(f"error: {str(e)}") self.status_label.settext("error occurred") self.status_label.setstylesheet("color: red;") finally: self.process_btn.setenabled(true) def register(parent=none): """ register function called by gguf loader when loading the addon. args: parent: the main gguf loader application instance returns: qwidget: the addon's ui widget, or none for background addons """ try: # create the addon instance addon = myawesomeaddon(parent) addon.start() # store addon reference in parent for lifecycle management if not hasattr(parent, '_addons'): parent._addons = {} parent._addons['my_awesome_addon'] = addon # create and return the ui widget widget = myawesomeaddonwidget(addon) return widget except exception as e: logging.error(f"failed to register my awesome addon: {e}") return none
Step 4: Test Your Addon
- Place your addon in the addons/ directory (see Package Structure for details)
- Launch GGUF Loader: ggufloader (see the Installation Guide if needed)
- Load a GGUF model in the main application (follow the Quick Start Guide if needed)
- Click your addon in the addon sidebar
- Test the functionality
🎨 Advanced Addon Features
Background Addons
Some addons don't need a UI and run in the background:
def register(parent=None):
    """Register a background addon."""
    try:
        addon = MyBackgroundAddon(parent)
        addon.start()
        # Store a reference but return None (no UI)
        parent._my_background_addon = addon
        return None
    except Exception as e:
        logging.error(f"Failed to register background addon: {e}")
        return None
Global Hotkeys and Text Selection
Learn from the Smart Floating Assistant addon:
from pyside6.qtcore import qtimer import pyautogui import pyperclip class textselectionaddon: def __init__(self, gguf_app): self.gguf_app = gguf_app self.selected_text = "" # timer for checking text selection self.selection_timer = qtimer() self.selection_timer.timeout.connect(self.check_selection) self.selection_timer.start(500) # check every 500ms def check_selection(self): """check for text selection.""" try: # save current clipboard original_clipboard = pyperclip.paste() # copy selection pyautogui.hotkey('ctrl', 'c') # check if we got new text qtimer.singleshot(50, lambda: self.process_selection(original_clipboard)) except: pass def process_selection(self, original_clipboard): """process the selected text.""" try: current_text = pyperclip.paste() if current_text != original_clipboard and len(current_text.strip()) > 3: self.selected_text = current_text.strip() self.on_text_selected(self.selected_text) # restore clipboard pyperclip.copy(original_clipboard) except: pass def on_text_selected(self, text): """handle text selection event.""" # your custom logic here print(f"text selected: {text[:50]}...")
Model Integration
Access and use the loaded GGUF model:
def use_model_for_processing(self, text): """use the gguf model for text processing.""" model = self.get_model() if not model: return "no model loaded" try: # different processing modes response = model( f"analyze this text: {text}", max_tokens=300, temperature=0.7, top_p=0.9, repeat_penalty=1.1, stop=["</s>", "human:", "user:"] ) return self.extract_response_text(response) except exception as e: return f"error: {str(e)}" def extract_response_text(self, response): """extract text from model response.""" if isinstance(response, dict) and 'choices' in response: return response['choices'][0].get('text', '').strip() elif isinstance(response, str): return response.strip() else: return str(response).strip()
📋 Addon Best Practices
1. Error Handling
Always wrap your code in try/except blocks:
def safe_operation(self):
    try:
        # Your code here
        pass
    except Exception as e:
        self.logger.error(f"Operation failed: {e}")
        return None
2. Resource Cleanup
Implement proper cleanup:
def stop(self):
    """Clean up addon resources."""
    if hasattr(self, 'timer'):
        self.timer.stop()
    if hasattr(self, 'ui_components'):
        for component in self.ui_components:
            component.close()
    self.logger.info("Addon stopped and cleaned up")
3. Configuration
Support user configuration:
import json import os class addonconfig: def __init__(self, addon_name): self.config_file = f"config/{addon_name}_config.json" self.default_config = { "enabled": true, "hotkey": "ctrl+shift+a", "auto_process": false } self.config = self.load_config() def load_config(self): try: if os.path.exists(self.config_file): with open(self.config_file, 'r') as f: return {**self.default_config, **json.load(f)} except: pass return self.default_config.copy() def save_config(self): os.makedirs(os.path.dirname(self.config_file), exist_ok=true) with open(self.config_file, 'w') as f: json.dump(self.config, f, indent=2)
4. Logging
Use proper logging:
import logging class myaddon: def __init__(self, gguf_app): self.logger = logging.getlogger(f"addon.{self.__class__.__name__}") self.logger.setlevel(logging.info) # log addon initialization self.logger.info("addon initialized") def process_data(self, data): self.logger.debug(f"processing data: {len(data)} items") try: # process data result = self.do_processing(data) self.logger.info("data processed successfully") return result except exception as e: self.logger.error(f"processing failed: {e}") raise
🔧 Testing Your Addon
Unit Testing
Create tests for your addon:
# test_my_addon.py import unittest from unittest.mock import mock, magicmock from addons.my_awesome_addon.main import myawesomeaddon class testmyawesomeaddon(unittest.testcase): def setup(self): self.mock_gguf_app = mock() self.addon = myawesomeaddon(self.mock_gguf_app) def test_addon_initialization(self): self.assertisnotnone(self.addon) self.assertequal(self.addon.gguf_app, self.mock_gguf_app) def test_text_processing(self): # mock the model mock_model = mock() mock_model.return_value = "processed text" self.mock_gguf_app.model = mock_model result = self.addon.process_text_with_ai("test text") self.assertequal(result, "processed text") if __name__ == '__main__': unittest.main()
Integration Testing
Test with the actual GGUF Loader:
# test_integration.py
def test_addon_with_gguf_loader():
    """Test addon integration with GGUF Loader."""
    # This would be run with an actual GGUF Loader instance
    pass
📦 Distributing Your Addon
1. Create Documentation
Create a README.md for your addon:
# My Awesome Addon
A powerful addon for GGUF Loader that provides [functionality].
## Features
- Feature 1
- Feature 2
- Feature 3
## Installation
1. Copy the addon to `addons/my_awesome_addon/`
2. Restart GGUF Loader
3. Click on the addon in the sidebar
## Configuration
[Configuration instructions]
## Usage
[Usage instructions]
2. Version Your Addon
Use semantic versioning in __init__.py:
__version__ = "1.0.0"  # MAJOR.MINOR.PATCH
3. Share with the Community
- Create a GitHub repository
- Add installation instructions
- Include screenshots and examples
- Submit to the community addon registry
💡 Get featured: Outstanding addons may be featured on our homepage community showcase - share your creation with the world!
🤝 Contributing to Core
Want to contribute to GGUF Loader itself? Check out our Contributing Guide.
📚 Additional Resources
- Addon API Reference - complete API documentation
- Smart Floater Example - learn from the built-in addon
- Package Structure - technical architecture details
- Installation Guide - development environment setup
🎉 You're Ready to Build Amazing Addons!
Congratulations on completing the Addon Development Guide! You now have the knowledge and tools to create powerful extensions for GGUF Loader.
What You've Learned
- ✅ Addon architecture and structure
- ✅ Creating UI components and background services
- ✅ Integrating with AI models
- ✅ Best practices and testing strategies
- ✅ Distribution and community sharing
Your Next Steps as an Addon Developer
🚀 Start Building
- Create your first addon - put your knowledge into practice with a simple project
- Study real examples - see how others have implemented creative solutions
- Join developer discussions - connect with other addon developers
🌟 Share and Grow
- Showcase your work - get your addon featured on our homepage
- Contribute to core - help improve GGUF Loader itself
- Mentor others - share your expertise with new developers
🔥 Advanced Challenges
- Build complex integrations - create addons that work with external services
- Optimize performance - learn advanced techniques for efficient addons
- Create addon libraries - build reusable components for the community
💡 Get Inspired
Visit our homepage community section to see what other developers have created and get inspiration for your next project!
Happy addon development! 🎉
Need help? Join our community discussions or contact us at support@ggufloader.com.
Addon API Reference
Complete API reference for developing GGUF Loader addons. Advanced · 20 minutes.
🏗️ Core API
Addon Registration
Every addon must implement a register() function:
def register(parent=None):
    """
    Register function called by GGUF Loader when loading the addon.

    Args:
        parent: The main GGUF Loader application instance

    Returns:
        QWidget: The addon's UI widget, or None for background addons
    """
    pass
Main Application Interface
The parent parameter provides access to the main GGUF Loader application:
class GGUFLoaderApp:
    """Main GGUF Loader application interface."""

    # Properties
    model: Optional[Any]          # Currently loaded GGUF model
    ai_chat: AIChat               # AI chat interface
    addon_manager: AddonManager   # Addon management system

    # Methods
    def get_model_backend(self) -> Optional[Any]:
        """Get the current model backend for addons."""

    def is_model_loaded(self) -> bool:
        """Check if a model is currently loaded."""

    # Signals
    model_loaded = Signal(object)   # Emitted when a model is loaded
    model_unloaded = Signal()       # Emitted when a model is unloaded
🤖 Model API
Accessing the Model
def get_model(self, gguf_app): """get the currently loaded gguf model.""" try: # method 1: direct access if hasattr(gguf_app, 'model') and gguf_app.model: return gguf_app.model # method 2: through ai chat if hasattr(gguf_app, 'ai_chat') and hasattr(gguf_app.ai_chat, 'model'): return gguf_app.ai_chat.model # method 3: backend method if hasattr(gguf_app, 'get_model_backend'): return gguf_app.get_model_backend() return none except exception as e: logging.error(f"error getting model: {e}") return none
Model Interface
class llamamodel: """gguf model interface (llama-cpp-python).""" def __call__(self, prompt: str, max_tokens: int = 256, temperature: float = 0.7, top_p: float = 0.9, top_k: int = 40, repeat_penalty: float = 1.1, stop: list[str] = none, stream: bool = false) -> union[str, dict, iterator]: """generate text from the model.""" pass def tokenize(self, text: str) -> list[int]: """tokenize text.""" pass def detokenize(self, tokens: list[int]) -> str: """detokenize tokens to text.""" pass
Text Generation
def generate_text(self, model, prompt: str, **kwargs) -> str: """generate text using the model.""" try: response = model( prompt, max_tokens=kwargs.get('max_tokens', 200), temperature=kwargs.get('temperature', 0.7), top_p=kwargs.get('top_p', 0.9), repeat_penalty=kwargs.get('repeat_penalty', 1.1), stop=kwargs.get('stop', ["</s>", "\n\n"]), stream=false ) return self.extract_response_text(response) except exception as e: logging.error(f"text generation failed: {e}") return f"error: {str(e)}" def extract_response_text(self, response) -> str: """extract text from model response.""" if isinstance(response, dict) and 'choices' in response: return response['choices'][0].get('text', '').strip() elif isinstance(response, str): return response.strip() else: return str(response).strip()
🎨 UI API
Widget Creation
from pyside6.qtwidgets import qwidget, qvboxlayout, qlabel, qpushbutton from pyside6.qtcore import qtimer, signal class addonwidget(qwidget): """base addon widget class.""" # signals text_processed = signal(str) error_occurred = signal(str) def __init__(self, addon_instance): super().__init__() self.addon = addon_instance self.setup_ui() def setup_ui(self): """setup the widget ui.""" layout = qvboxlayout(self) # title title = qlabel("my addon") title.setstylesheet("font-size: 16px; font-weight: bold;") layout.addwidget(title) # content self.setup_content(layout) def setup_content(self, layout): """override this method to add custom content.""" pass
Common UI Components
# status indicator def create_status_indicator(self): """create a status indicator widget.""" self.status_label = qlabel("ready") self.status_label.setstylesheet(""" qlabel { padding: 5px; border-radius: 3px; background-color: #4caf50; color: white; } """) return self.status_label def update_status(self, message: str, status_type: str = "info"): """update status indicator.""" colors = { "info": "#2196f3", "success": "#4caf50", "warning": "#ff9800", "error": "#f44336" } self.status_label.settext(message) self.status_label.setstylesheet(f""" qlabel {colors.get(status_type, colors['info'])}; color: white; }} """) # progress indicator def create_progress_indicator(self): """create a progress indicator.""" from pyside6.qtwidgets import qprogressbar self.progress_bar = qprogressbar() self.progress_bar.setvisible(false) return self.progress_bar def show_progress(self, message: str = "processing..."): """show progress indicator.""" self.progress_bar.setvisible(true) self.progress_bar.setrange(0, 0) # indeterminate self.update_status(message, "info") def hide_progress(self): """hide progress indicator.""" self.progress_bar.setvisible(false)
Floating UI Components
from pyside6.qtcore import qt from pyside6.qtgui import qcursor class floatingwidget(qwidget): """create floating widgets like the smart assistant.""" def __init__(self): super().__init__() self.setup_floating_widget() def setup_floating_widget(self): """setup floating widget properties.""" self.setwindowflags( qt.tooltip | qt.framelesswindowhint | qt.windowstaysontophint ) self.setattribute(qt.wa_translucentbackground) def show_near_cursor(self, offset_x: int = 10, offset_y: int = -40): """show widget near cursor position.""" cursor_pos = qcursor.pos() self.move(cursor_pos.x() + offset_x, cursor_pos.y() + offset_y) self.show()
🔧 System Integration API
Text Selection Detection
import pyautogui import pyperclip from pyside6.qtcore import qtimer class textselectionmonitor: """monitor for global text selection.""" def __init__(self, callback): self.callback = callback self.last_clipboard = "" self.selected_text = "" # timer for checking selection self.timer = qtimer() self.timer.timeout.connect(self.check_selection) self.timer.start(300) # check every 300ms def check_selection(self): """check for text selection.""" try: # save current clipboard original_clipboard = pyperclip.paste() # copy selection pyautogui.hotkey('ctrl', 'c') # process after small delay qtimer.singleshot(50, lambda: self.process_selection(original_clipboard)) except exception as e: logging.debug(f"selection check failed: {e}") def process_selection(self, original_clipboard): """process the selection.""" try: current_text = pyperclip.paste() # check if we got new selected text if (current_text != original_clipboard and current_text and len(current_text.strip()) > 3): self.selected_text = current_text.strip() self.callback(self.selected_text) # restore clipboard pyperclip.copy(original_clipboard) except exception as e: logging.debug(f"selection processing failed: {e}") def stop(self): """stop monitoring.""" self.timer.stop()
Clipboard Integration
import pyperclip class clipboardmanager: """manage clipboard operations.""" @staticmethod def get_text() -> str: """get text from clipboard.""" try: return pyperclip.paste() except exception as e: logging.error(f"failed to get clipboard text: {e}") return "" @staticmethod def set_text(text: str) -> bool: """set text to clipboard.""" try: pyperclip.copy(text) return true except exception as e: logging.error(f"failed to set clipboard text: {e}") return false @staticmethod def append_text(text: str) -> bool: """append text to clipboard.""" try: current = clipboardmanager.get_text() new_text = f"{current}\n{text}" if current else text return clipboardmanager.set_text(new_text) except exception as e: logging.error(f"failed to append clipboard text: {e}") return false
Hotkey Registration
import keyboard class hotkeymanager: """manage global hotkeys.""" def __init__(self): self.registered_hotkeys = {} def register_hotkey(self, hotkey: str, callback, description: str = ""): """register a global hotkey.""" try: keyboard.add_hotkey(hotkey, callback) self.registered_hotkeys[hotkey] = { 'callback': callback, 'description': description } logging.info(f"registered hotkey: {hotkey}") return true except exception as e: logging.error(f"failed to register hotkey {hotkey}: {e}") return false def unregister_hotkey(self, hotkey: str): """unregister a hotkey.""" try: keyboard.remove_hotkey(hotkey) if hotkey in self.registered_hotkeys: del self.registered_hotkeys[hotkey] logging.info(f"unregistered hotkey: {hotkey}") return true except exception as e: logging.error(f"failed to unregister hotkey {hotkey}: {e}") return false def cleanup(self): """clean up all registered hotkeys.""" for hotkey in list(self.registered_hotkeys.keys()): self.unregister_hotkey(hotkey)
📁 Configuration API
Addon Configuration
import json import os from pathlib import path class addonconfig: """manage addon configuration.""" def __init__(self, addon_name: str): self.addon_name = addon_name self.config_dir = path.home() / ".ggufloader" / "addons" / addon_name self.config_file = self.config_dir / "config.json" self.config = {} self.load_config() def load_config(self): """load configuration from file.""" try: if self.config_file.exists(): with open(self.config_file, 'r') as f: self.config = json.load(f) except exception as e: logging.error(f"failed to load config: {e}") self.config = {} def save_config(self): """save configuration to file.""" try: self.config_dir.mkdir(parents=true, exist_ok=true) with open(self.config_file, 'w') as f: json.dump(self.config, f, indent=2) except exception as e: logging.error(f"failed to save config: {e}") def get(self, key: str, default=none): """get configuration value.""" return self.config.get(key, default) def set(self, key: str, value): """set configuration value.""" self.config[key] = value self.save_config() def update(self, updates: dict): """update multiple configuration values.""" self.config.update(updates) self.save_config()
🔄 Event System API
Addon Events
from pyside6.qtcore import qobject, signal class addoneventsystem(qobject): """event system for addon communication.""" # core events addon_loaded = signal(str) # addon_name addon_unloaded = signal(str) # addon_name model_changed = signal(object) # model text_selected = signal(str) # selected_text text_processed = signal(str, str) # original_text, processed_text def __init__(self): super().__init__() self.event_handlers = {} def emit_event(self, event_name: str, *args, **kwargs): """emit a custom event.""" if hasattr(self, event_name): signal = getattr(self, event_name) signal.emit(*args, **kwargs) def connect_event(self, event_name: str, handler): """connect to an event.""" if hasattr(self, event_name): signal = getattr(self, event_name) signal.connect(handler)
🧪 Testing API
Addon Testing Utilities
For comprehensive testing examples, see the Smart Floater Example, which includes both unit and integration tests.
import unittest from unittest.mock import mock, magicmock class addontestcase(unittest.testcase): """base test case for addon testing.""" def setup(self): """set up test environment.""" self.mock_gguf_app = mock() self.mock_model = mock() self.mock_gguf_app.model = self.mock_model def create_mock_model_response(self, text: str): """create a mock model response.""" return { 'choices': [{'text': text}] } def assert_model_called_with(self, expected_prompt: str): """assert model was called with expected prompt.""" self.mock_model.assert_called() call_args = self.mock_model.call_args self.assertin(expected_prompt, call_args[0][0]) # example test class testmyaddon(addontestcase): def test_text_processing(self): from addons.my_addon.main import myaddon addon = myaddon(self.mock_gguf_app) self.mock_model.return_value = self.create_mock_model_response("processed text") result = addon.process_text("input text") self.assertequal(result, "processed text") self.assert_model_called_with("input text")
📊 Logging API
Addon Logging
import logging from pathlib import path class addonlogger: """logging utilities for addons.""" @staticmethod def setup_logger(addon_name: str, level=logging.info): """setup logger for addon.""" logger = logging.getlogger(f"addon.{addon_name}") logger.setlevel(level) # create file handler log_dir = path.home() / ".ggufloader" / "logs" log_dir.mkdir(parents=true, exist_ok=true) file_handler = logging.filehandler(log_dir / f"{addon_name}.log") file_handler.setlevel(level) # create formatter formatter = logging.formatter( '%(asctime)s - %(name)s - %(levelname)s - %(message)s' ) file_handler.setformatter(formatter) # add handler to logger logger.addhandler(file_handler) return logger # usage in addon logger = addonlogger.setup_logger("my_addon") logger.info("addon initialized") logger.error("something went wrong")
🔒 Security API
Safe Execution
import subprocess import tempfile import os class safeexecution: """utilities for safe code execution.""" @staticmethod def run_command_safely(command: list, timeout: int = 30) -> tuple: """run command safely with timeout.""" try: result = subprocess.run( command, capture_output=true, text=true, timeout=timeout, check=false ) return result.returncode, result.stdout, result.stderr except subprocess.timeoutexpired: return -1, "", "command timed out" except exception as e: return -1, "", str(e) @staticmethod def create_temp_file(content: str, suffix: str = ".tmp") -> str: """create temporary file safely.""" with tempfile.namedtemporaryfile(mode='w', suffix=suffix, delete=false) as f: f.write(content) return f.name @staticmethod def cleanup_temp_file(filepath: str): """clean up temporary file.""" try: if os.path.exists(filepath): os.unlink(filepath) except exception as e: logging.error(f"failed to cleanup temp file: {e}")
📚 Additional Resources
- Smart Floater Example - complete addon example
- Addon Development Guide - step-by-step development guide
- Package Structure - technical architecture
- Installation Guide - development setup
Need help with the API? Join our community discussions or contact support@ggufloader.com.
related documentation
- this guide will teach you how to create custom addons for gguf loader 2.0.0. addons extend the functionality of gguf loader and can provide new features, ui components, and integrations. want to see existing addons in action? [check out the addon showcase on our homepage](/#features "see community addons and examples"). ## 🏗️ addon architecture ### what is an addon? an addon is a python package that extends gguf loader's functionality. addons can: - add new ui components and windows - process text and interact with ai models - integrate with external services - provide new workflows and automation - extend the main application's capabilities 🎯 **see real examples**: visit the [homepage community section](/#community "browse community-created addons") to see what others have built and get inspiration for your own addons. ### addon structure every addon must follow this basic structure: ``` addons/ └── your_addon_name/ ├── __init__.py # addon entry point ├── main.py # main addon logic ├── ui.py # ui components (optional) ├── config.py # configuration (optional) └── readme.md # addon documentation ``` ## 🚀 creating your first addon ### step 1: create the addon directory ```bash mkdir -p addons/my_awesome_addon cd addons/my_awesome_addon ``` ### step 2: create the entry point (`__init__.py`) ```python """ my awesome addon - a sample addon for gguf loader this addon demonstrates the basic structure and capabilities of the gguf loader addon system. """ __version__ = "1.0.0" __author__ = "your name" __description__ = "a sample addon that demonstrates basic functionality" # import the register function from .main import register # export the register function __all__ = ["register"] ``` ### step 3: create the main logic (`main.py`) ```python """ main logic for my awesome addon """ import logging from pyside6.qtwidgets import qwidget, qvboxlayout, qlabel, qpushbutton, qtextedit from pyside6.qtcore import qtimer class myawesomeaddon: """main addon class that handles the addon functionality.""" def __init__(self, gguf_app): """initialize the addon with reference to the main gguf app.""" self.gguf_app = gguf_app self.logger = logging.getlogger(__name__) self.is_running = false # initialize your addon components here self.setup_addon() def setup_addon(self): """setup the addon components.""" self.logger.info("setting up my awesome addon") # add your initialization logic here def get_model(self): """get the currently loaded gguf model.""" try: if hasattr(self.gguf_app, 'model') and self.gguf_app.model: return self.gguf_app.model elif hasattr(self.gguf_app, 'ai_chat') and hasattr(self.gguf_app.ai_chat, 'model'): return self.gguf_app.ai_chat.model return none except exception as e: self.logger.error(f"error getting model: {e}") return none def process_text_with_ai(self, text, prompt_template="process this text: {text}"): """process text using the loaded ai model.""" model = self.get_model() if not model: return "error: no ai model loaded" try: prompt = prompt_template.format(text=text) response = model( prompt, max_tokens=200, temperature=0.7, stop=["", "\n\n"] ) # extract text from response if isinstance(response, dict) and 'choices' in response: return response['choices'][0].get('text', '').strip() elif isinstance(response, str): return response.strip() else: return str(response).strip() except exception as e: self.logger.error(f"error processing text: {e}") return f"error: {str(e)}" def start(self): """start the addon.""" self.is_running = true self.logger.info("my awesome addon started") def 
stop(self): """stop the addon.""" self.is_running = false self.logger.info("my awesome addon stopped") class myawesomeaddonwidget(qwidget): """ui widget for the addon.""" def __init__(self, addon_instance): super().__init__() self.addon = addon_instance self.setup_ui() def setup_ui(self): """setup the addon ui.""" self.setwindowtitle("my awesome addon") self.setminimumsize(400, 300) layout = qvboxlayout(self) # title title = qlabel("🚀 my awesome addon") title.setstylesheet("font-size: 18px; font-weight: bold; margin: 10px;") layout.addwidget(title) # description description = qlabel("this is a sample addon that demonstrates basic functionality.") description.setwordwrap(true) layout.addwidget(description) # input area layout.addwidget(qlabel("enter text to process:")) self.input_text = qtextedit() self.input_text.setmaximumheight(100) self.input_text.setplaceholdertext("type some text here...") layout.addwidget(self.input_text) # process button self.process_btn = qpushbutton("🤖 process with ai") self.process_btn.clicked.connect(self.process_text) layout.addwidget(self.process_btn) # output area layout.addwidget(qlabel("ai response:")) self.output_text = qtextedit() self.output_text.setreadonly(true) layout.addwidget(self.output_text) # status self.status_label = qlabel("ready") self.status_label.setstylesheet("color: green;") layout.addwidget(self.status_label) def process_text(self): """process the input text with ai.""" input_text = self.input_text.toplaintext().strip() if not input_text: self.output_text.settext("please enter some text to process.") return self.status_label.settext("processing...") self.status_label.setstylesheet("color: orange;") self.process_btn.setenabled(false) # process with ai (using qtimer to avoid blocking ui) qtimer.singleshot(100, lambda: self._do_processing(input_text)) def _do_processing(self, text): """actually process the text.""" try: result = self.addon.process_text_with_ai( text, "please provide a helpful and insightful response to: {text}" ) self.output_text.settext(result) self.status_label.settext("complete!") self.status_label.setstylesheet("color: green;") except exception as e: self.output_text.settext(f"error: {str(e)}") self.status_label.settext("error occurred") self.status_label.setstylesheet("color: red;") finally: self.process_btn.setenabled(true) def register(parent=none): """ register function called by gguf loader when loading the addon. args: parent: the main gguf loader application instance returns: qwidget: the addon's ui widget, or none for background addons """ try: # create the addon instance addon = myawesomeaddon(parent) addon.start() # store addon reference in parent for lifecycle management if not hasattr(parent, '_addons'): parent._addons = {} parent._addons['my_awesome_addon'] = addon # create and return the ui widget widget = myawesomeaddonwidget(addon) return widget except exception as e: logging.error(f"failed to register my awesome addon: {e}") return none ``` ### step 4: test your addon 1. **place your addon** in the `addons/` directory (see [package structure](/docs/package-structure/ "understanding addon directory structure") for details) 2. **launch gguf loader**: `ggufloader` (see [installation guide](/docs/installation/ "installation instructions") if needed) 3. **load a gguf model** in the main application (follow the [quick start guide](/docs/quick-start/ "getting started tutorial") if needed) 4. **click your addon** in the addon sidebar 5. 
### Global Hotkeys and Text Selection

Learn from the [Smart Floating Assistant addon](/docs/smart-floater-example/ "Complete addon example with full source code analysis"):

```python
from PySide6.QtCore import QTimer
import pyautogui
import pyperclip


class TextSelectionAddon:
    def __init__(self, gguf_app):
        self.gguf_app = gguf_app
        self.selected_text = ""

        # Timer for checking text selection
        self.selection_timer = QTimer()
        self.selection_timer.timeout.connect(self.check_selection)
        self.selection_timer.start(500)  # Check every 500 ms

    def check_selection(self):
        """Check for text selection."""
        try:
            # Save the current clipboard
            original_clipboard = pyperclip.paste()

            # Copy the selection
            pyautogui.hotkey('ctrl', 'c')

            # Check whether we got new text
            QTimer.singleShot(50, lambda: self.process_selection(original_clipboard))
        except Exception:
            pass

    def process_selection(self, original_clipboard):
        """Process the selected text."""
        try:
            current_text = pyperclip.paste()
            if current_text != original_clipboard and len(current_text.strip()) > 3:
                self.selected_text = current_text.strip()
                self.on_text_selected(self.selected_text)

            # Restore the clipboard
            pyperclip.copy(original_clipboard)
        except Exception:
            pass

    def on_text_selected(self, text):
        """Handle the text-selection event."""
        # Your custom logic here
        print(f"Text selected: {text[:50]}...")
```

### Model Integration

Access and use the loaded GGUF model:

```python
def use_model_for_processing(self, text):
    """Use the GGUF model for text processing."""
    model = self.get_model()
    if not model:
        return "No model loaded"

    try:
        # Different processing modes
        response = model(
            f"Analyze this text: {text}",
            max_tokens=300,
            temperature=0.7,
            top_p=0.9,
            repeat_penalty=1.1,
            stop=["</s>", "Human:", "User:"]
        )
        return self.extract_response_text(response)

    except Exception as e:
        return f"Error: {str(e)}"


def extract_response_text(self, response):
    """Extract text from a model response."""
    if isinstance(response, dict) and 'choices' in response:
        return response['choices'][0].get('text', '').strip()
    elif isinstance(response, str):
        return response.strip()
    else:
        return str(response).strip()
```

## 📋 Addon Best Practices

### 1. Error Handling

Always wrap risky operations in try/except blocks:

```python
def safe_operation(self):
    try:
        # Your code here
        pass
    except Exception as e:
        self.logger.error(f"Operation failed: {e}")
        return None
```

### 2. Resource Cleanup

Implement proper cleanup:

```python
def stop(self):
    """Clean up addon resources."""
    if hasattr(self, 'timer'):
        self.timer.stop()

    if hasattr(self, 'ui_components'):
        for component in self.ui_components:
            component.close()

    self.logger.info("Addon stopped and cleaned up")
```

### 3. Configuration

Support user configuration:

```python
import json
import os


class AddonConfig:
    def __init__(self, addon_name):
        self.config_file = f"config/{addon_name}_config.json"
        self.default_config = {
            "enabled": True,
            "hotkey": "ctrl+shift+a",
            "auto_process": False
        }
        self.config = self.load_config()

    def load_config(self):
        try:
            if os.path.exists(self.config_file):
                with open(self.config_file, 'r') as f:
                    return {**self.default_config, **json.load(f)}
        except Exception:
            pass
        return self.default_config.copy()

    def save_config(self):
        os.makedirs(os.path.dirname(self.config_file), exist_ok=True)
        with open(self.config_file, 'w') as f:
            json.dump(self.config, f, indent=2)
```
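For reference, here is a minimal usage sketch of the `AddonConfig` helper defined above; the addon name and keys simply reuse its defaults.

```python
config = AddonConfig("my_awesome_addon")

if config.config["enabled"]:
    print("Hotkey:", config.config["hotkey"])

# Change a setting and persist it for the next run
config.config["auto_process"] = True
config.save_config()
```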
### 4. Logging

Use proper logging:

```python
import logging


class MyAddon:
    def __init__(self, gguf_app):
        self.logger = logging.getLogger(f"addon.{self.__class__.__name__}")
        self.logger.setLevel(logging.INFO)

        # Log addon initialization
        self.logger.info("Addon initialized")

    def process_data(self, data):
        self.logger.debug(f"Processing data: {len(data)} items")

        try:
            # Process data
            result = self.do_processing(data)
            self.logger.info("Data processed successfully")
            return result
        except Exception as e:
            self.logger.error(f"Processing failed: {e}")
            raise
```
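If you do not see your addon's log output while developing, the hosting process may not have a log handler configured. A minimal sketch, assuming nothing else has already called `logging.basicConfig()`:

```python
import logging

# Route all log records, including those from "addon.*" loggers, to the console
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
```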
## 🔧 Testing Your Addon

### Unit Testing

Create tests for your addon:

```python
# test_my_addon.py
import unittest
from unittest.mock import Mock, MagicMock

from addons.my_awesome_addon.main import MyAwesomeAddon


class TestMyAwesomeAddon(unittest.TestCase):
    def setUp(self):
        self.mock_gguf_app = Mock()
        self.addon = MyAwesomeAddon(self.mock_gguf_app)

    def test_addon_initialization(self):
        self.assertIsNotNone(self.addon)
        self.assertEqual(self.addon.gguf_app, self.mock_gguf_app)

    def test_text_processing(self):
        # Mock the model
        mock_model = Mock()
        mock_model.return_value = "Processed text"
        self.mock_gguf_app.model = mock_model

        result = self.addon.process_text_with_ai("Test text")
        self.assertEqual(result, "Processed text")


if __name__ == '__main__':
    unittest.main()
```

### Integration Testing

Test with the actual GGUF Loader:

```python
# test_integration.py
def test_addon_with_gguf_loader():
    """Test addon integration with GGUF Loader."""
    # This would be run with an actual GGUF Loader instance
    pass
```

## 📦 Distributing Your Addon

### 1. Create Documentation

Create a `README.md` for your addon:

```markdown
# My Awesome Addon

A powerful addon for GGUF Loader that provides [functionality].

## Features

- Feature 1
- Feature 2
- Feature 3

## Installation

1. Copy the addon to `addons/my_awesome_addon/`
2. Restart GGUF Loader
3. Click on the addon in the sidebar

## Configuration

[Configuration instructions]

## Usage

[Usage instructions]
```

### 2. Version Your Addon

Use semantic versioning in `__init__.py`:

```python
__version__ = "1.0.0"  # MAJOR.MINOR.PATCH
```

### 3. Share with the Community

- Create a GitHub repository
- Add installation instructions
- Include screenshots and examples
- Submit to the community addon registry

💡 **Get featured**: outstanding addons may be featured on our [homepage community showcase](/#community "Community addon highlights") - share your creation with the world!

## 🤝 Contributing to Core

Want to contribute to GGUF Loader itself? Check out our [Contributing Guide](/docs/contributing/ "Guidelines for contributing to GGUF Loader development").

## 📚 Additional Resources

- [Addon API Reference](/docs/addon-api/ "Complete API documentation for addon developers") - complete API documentation
- [Smart Floater Example](/docs/smart-floater-example/ "Real-world addon example with detailed code analysis") - learn from the built-in addon
- [Package Structure](/docs/package-structure/ "Understanding GGUF Loader's architecture") - technical architecture details
- [Installation Guide](/docs/installation/ "Setup development environment") - development environment setup

---

## 🎉 You're Ready to Build Amazing Addons!

Congratulations on completing the addon development guide! You now have the knowledge and tools to create powerful extensions for GGUF Loader.

### What You've Learned

- ✅ Addon architecture and structure
- ✅ Creating UI components and background services
- ✅ Integrating with AI models
- ✅ Best practices and testing strategies
- ✅ Distribution and community sharing

### Your Next Steps as an Addon Developer

#### 🚀 Start Building

- **Create your first addon** - put your knowledge into practice with a simple project
- **[Study real examples](/#features "Community addon showcase")** - see how others have implemented creative solutions
- **[Join developer discussions](/#community "Developer community")** - connect with other addon developers

#### 🌟 Share and Grow

- **[Showcase your work](/#community "Community highlights")** - get your addon featured on our homepage
- **[Contribute to core](/docs/contributing/ "Help improve GGUF Loader")** - help improve GGUF Loader itself
- **[Mentor others](/#community "Help newcomers")** - share your expertise with new developers

#### 🔥 Advanced Challenges

- **Build complex integrations** - create addons that work with external services
- **Optimize performance** - learn advanced techniques for efficient addons
- **Create addon libraries** - build reusable components for the community

### 💡 Get Inspired

Visit our [homepage community section](/#community "See what's possible") to see what other developers have created and get inspiration for your next project!

**Happy addon development! 🎉**

Need help? Join our [community discussions](https://github.com/gguf-loader/gguf-loader/discussions) or contact us at support@ggufloader.com.
# Smart Floater Example

Learn how to create addons by studying the built-in Smart Floating Assistant addon. This is a complete, real-world example that demonstrates all the key concepts of addon development.

## 📋 Overview

The Smart Floating Assistant is GGUF Loader's flagship addon that provides:

- **Global text selection detection** across all applications
- **Floating button interface** that appears near selected text
- **AI-powered text processing** (summarize and comment)
- **Seamless clipboard integration**
- **Privacy-first local processing**

## 🏗️ Architecture

### File Structure

```
addons/smart_floater/
├── __init__.py                # Addon entry point
├── simple_main.py             # Main addon logic (simplified version)
├── main.py                    # Full-featured version
├── floater_ui.py              # UI components
├── comment_engine.py          # Text processing engine
├── injector.py                # Text injection utilities
├── error_handler.py           # Error handling
├── privacy_security.py        # Privacy and security features
└── performance_optimizer.py   # Performance optimization
```

### Key Components

1. **SimpleFloatingAssistant**: main addon class
2. **SmartFloaterStatusWidget**: control panel UI
3. **Text selection monitor**: global text detection
4. **AI processing engine**: text summarization and commenting
5. **Clipboard manager**: safe clipboard operations

## 🔍 Code Analysis

### Entry Point (`__init__.py`)

```python
"""
Simple Smart Floating Assistant

Shows a button when you select text, processes it with AI. That's it.
"""

# Use the simple version instead of the complex one
from .simple_main import register

__all__ = ["register"]
```

**Key lessons:**

- Keep the entry point simple
- Export only the `register` function
- Use clear, descriptive docstrings

### Main Logic (`simple_main.py`)

Let's break down the main addon class:

```python
class SimpleFloatingAssistant:
    """Simple floating assistant that shows a button on text selection."""

    def __init__(self, gguf_app_instance: Any):
        """Initialize the addon with a GGUF Loader reference."""
        self.gguf_app = gguf_app_instance
        self._is_running = False
        self._floating_button = None
        self._popup_window = None
        self._selected_text = ""
        self.model = None  # Store the model reference directly

        # Initialize clipboard tracking
        try:
            self.last_clipboard = pyperclip.paste()
        except:
            self.last_clipboard = ""

        # Button persistence tracking
        self.button_show_time = 0
        self.button_should_stay = False

        # Connect to model loading signals
        self.connect_to_model_signals()

        # Timer to check for text selection
        self.timer = QTimer()
        self.timer.timeout.connect(self.check_selection)
        self.timer.start(300)  # Check every 300 ms
```

**Key lessons:**

- Store a reference to the main app (`gguf_app`)
- Initialize all state variables
- Connect to model loading signals
- Use QTimer for periodic tasks
- Handle initialization errors gracefully

### Model Integration

```python
def connect_to_model_signals(self):
    """Connect to model loading signals from the main app."""
    try:
        # Connect to the main app's model_loaded signal
        if hasattr(self.gguf_app, 'model_loaded'):
            self.gguf_app.model_loaded.connect(self.on_model_loaded)
            print("✅ Connected to model_loaded signal")

        # Also try to connect to the ai_chat model_loaded signal
        if hasattr(self.gguf_app, 'ai_chat') and hasattr(self.gguf_app.ai_chat, 'model_loaded'):
            self.gguf_app.ai_chat.model_loaded.connect(self.on_model_loaded)
            print("✅ Connected to ai_chat model_loaded signal")

    except Exception as e:
        print(f"❌ Error connecting to model signals: {e}")

def on_model_loaded(self, model):
    """Handle the model loaded event."""
    self.model = model
    print(f"✅ Addon received model: {type(model)}")
    print(f"   Model methods: {[m for m in dir(model) if not m.startswith('_')][:10]}")


def get_model(self):
    """Get the loaded model."""
    try:
        # First try our stored model reference
        if self.model:
            print("✅ Using stored model reference")
            return self.model

        # Try multiple fallback methods
        if hasattr(self.gguf_app, 'model'):
            if self.gguf_app.model:
                self.model = self.gguf_app.model
                return self.gguf_app.model

        # ... more fallback methods

        return None

    except Exception as e:
        print(f"❌ Error getting model: {e}")
        return None
```

**Key lessons:**

- Connect to model loading signals for real-time updates
- Implement multiple fallback methods for model access
- Store the model reference locally for performance
- Use defensive programming with try/except blocks
- Provide helpful debug output

### Text Selection Detection

```python
def check_selection(self):
    """Check if text is currently selected (without copying)."""
    try:
        # Save the current clipboard content
        original_clipboard = pyperclip.paste()

        # Temporarily copy the selection to check if text is selected
        pyautogui.hotkey('ctrl', 'c')

        # Small delay to let the clipboard update
        QTimer.singleShot(50, lambda: self._process_selection_check(original_clipboard))

    except:
        pass


def _process_selection_check(self, original_clipboard):
    """Process the selection check and restore the clipboard."""
    try:
        # Get what was copied
        current_selection = pyperclip.paste()

        # Check if we got new selected text
        if (current_selection != original_clipboard and
                current_selection and
                len(current_selection.strip()) > 3 and
                len(current_selection) < 5000):

            # We have selected text!
            if current_selection.strip() != self.selected_text:
                self.selected_text = current_selection.strip()
                self.show_button()
                self.button_show_time = 0  # Reset the timer
                self.button_should_stay = True
        else:
            # No text selected - but don't hide immediately
            if self.button_should_stay:
                self.button_show_time += 1
                # Hide after 10 checks (about 3 seconds)
                if self.button_show_time > 10:
                    self.hide_button()
                    self.button_should_stay = False
                    self.button_show_time = 0

        # Always restore the original clipboard immediately
        pyperclip.copy(original_clipboard)

    except:
        # Always try to restore the clipboard even if there's an error
        try:
            pyperclip.copy(original_clipboard)
        except:
            pass
```

**Key lessons:**

- Use non-intrusive text selection detection
- Always restore the user's clipboard
- Implement smart button persistence (don't hide it immediately)
- Handle edge cases (empty text, very long text)
- Use defensive programming for clipboard operations

### Floating UI

```python
def show_button(self):
    """Show the floating button near the cursor."""
    if self.button:
        self.button.close()

    self.button = QPushButton("✨")
    self.button.setFixedSize(40, 40)
    self.button.setWindowFlags(Qt.ToolTip | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
    self.button.setStyleSheet("""
        QPushButton {
            background-color: #0078d4;
            border: none;
            border-radius: 20px;
            color: white;
            font-size: 16px;
        }
        QPushButton:hover {
            background-color: #106ebe;
        }
    """)

    # Position near the cursor
    pos = QCursor.pos()
    self.button.move(pos.x() + 10, pos.y() - 50)
    self.button.clicked.connect(self.show_popup)
    self.button.show()

    # Reset persistence tracking
    self.button_show_time = 0
    self.button_should_stay = True
```

**Key lessons:**

- Use appropriate window flags for floating widgets
- Position relative to the cursor for better UX
- Apply attractive styling with CSS
- Connect button clicks to actions
- Clean up previous instances before creating new ones
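The selection-check code above calls `hide_button()`, which is not shown in this excerpt. A minimal counterpart consistent with `show_button()` might look like the sketch below; check the real `simple_main.py` for the actual implementation.

```python
def hide_button(self):
    """Hide and dispose of the floating button (illustrative sketch only)."""
    if self.button:
        self.button.close()
        self.button = None
```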
### AI Text Processing

```python
def process_text(self, action):
    """Process text with AI using GGUF Loader's model."""
    try:
        model = self.get_model()
        if not model:
            self.result_area.setText(
                "❌ Error: No AI model loaded in GGUF Loader\n\nPlease load a GGUF model first!"
            )
            return

        self.result_area.setText("🤖 Processing with AI...")

        # Create an appropriate prompt based on the action
        if action == "summarize":
            prompt = f"Please provide a clear and concise summary of the following text:\n\n{self.selected_text}\n\nSummary:"
        else:  # comment
            prompt = f"Please write a thoughtful and insightful comment about the following text:\n\n{self.selected_text}\n\nComment:"

        # Process with the GGUF model using the same interface as AIChat
        try:
            # Use the model the same way ChatGenerator does
            response = model(
                prompt,
                max_tokens=300,
                stream=False,  # Don't stream, for simplicity
                temperature=0.7,
                top_p=0.9,
                repeat_penalty=1.1,
                top_k=40,
                stop=["</s>", "Human:", "User:", "\n\n\n"]
            )

            # Extract text from the response
            if isinstance(response, dict) and 'choices' in response:
                result_text = response['choices'][0].get('text', '').strip()
            elif isinstance(response, str):
                result_text = response.strip()
            else:
                result_text = str(response).strip()

            # Clean up the result
            if result_text:
                # Remove any prompt echoing
                if "Summary:" in result_text:
                    result_text = result_text.split("Summary:")[-1].strip()
                elif "Comment:" in result_text:
                    result_text = result_text.split("Comment:")[-1].strip()

                self.result_area.setText(result_text)
                self.copy_btn.setEnabled(True)
            else:
                self.result_area.setText("❌ No response generated. Try again.")

        except Exception as e:
            self.result_area.setText(
                f"❌ Error processing with AI model:\n{str(e)}\n\nMake sure a compatible GGUF model is loaded."
            )

    except Exception as e:
        self.result_area.setText(f"❌ Unexpected error: {str(e)}")
```

**Key lessons:**

- Check model availability before processing
- Create context-appropriate prompts
- Use consistent model parameters
- Handle different response formats
- Clean up AI responses (remove prompt echoing)
- Provide clear error messages to users
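The code enables `self.copy_btn` after a successful response, but the slot it triggers is not part of this excerpt. A plausible sketch, assuming the button is wired to a `copy_result()` method and that `pyperclip` is available as elsewhere in the addon:

```python
def copy_result(self):
    """Copy the AI result to the clipboard (illustrative sketch only)."""
    text = self.result_area.toPlainText()
    if text:
        pyperclip.copy(text)
```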
### Status Widget for the Addon Panel

```python
class SmartFloaterStatusWidget:
    def __init__(self, addon_instance):
        from PySide6.QtWidgets import (
            QWidget, QVBoxLayout, QHBoxLayout, QLabel, QPushButton, QTextEdit
        )

        self.addon = addon_instance
        self.widget = QWidget()
        self.widget.setWindowTitle("Smart Floating Assistant")

        layout = QVBoxLayout(self.widget)

        # Status info
        layout.addWidget(QLabel("🤖 Smart Floating Assistant"))
        layout.addWidget(QLabel("Status: Running in background"))
        layout.addWidget(QLabel(""))
        layout.addWidget(QLabel("How to use:"))
        layout.addWidget(QLabel("1. Select text anywhere on your screen"))
        layout.addWidget(QLabel("2. Click the ✨ button that appears"))
        layout.addWidget(QLabel("3. Choose Summarize or Comment"))
        layout.addWidget(QLabel(""))

        # Test button
        test_btn = QPushButton("🧪 Test Model Connection")
        test_btn.clicked.connect(self.test_model)
        layout.addWidget(test_btn)

        # Result area
        self.result_area = QTextEdit()
        self.result_area.setMaximumHeight(100)
        self.result_area.setReadOnly(True)
        layout.addWidget(self.result_area)

        # Stop/start buttons
        button_layout = QHBoxLayout()

        stop_btn = QPushButton("⏹️ Stop")
        stop_btn.clicked.connect(self.stop_addon)
        button_layout.addWidget(stop_btn)

        start_btn = QPushButton("▶️ Start")
        start_btn.clicked.connect(self.start_addon)
        button_layout.addWidget(start_btn)

        layout.addLayout(button_layout)
```

**Key lessons:**

- Create informative status widgets for addon management
- Provide clear usage instructions
- Include testing and control functionality
- Use emoji and clear labels for better UX
- Separate UI logic from core addon logic

### Registration Function

```python
def register(parent=None):
    """Register the Simple Floating Assistant."""
    try:
        print(f"🔧 register called with parent: {type(parent)}")

        # Stop the existing addon if it is running
        if hasattr(parent, '_simple_floater'):
            parent._simple_floater.stop()

        # Create and start the simple addon
        addon = SimpleFloatingAssistant(parent)
        parent._simple_floater = addon
        print("✅ Simple Floating Assistant started!")

        # Return a status widget for the addon panel
        status_widget = SmartFloaterStatusWidget(addon)
        return status_widget.widget

    except Exception as e:
        print(f"❌ Failed to start the simple addon: {e}")
        return None
```

**Key lessons:**

- Always handle cleanup of existing instances
- Store the addon reference in the parent for lifecycle management
- Return an appropriate UI widget, or None for background addons
- Provide clear success/failure feedback
- Use defensive programming with try/except

## 🎯 Best Practices Demonstrated

### 1. Defensive Programming

- Extensive use of try/except blocks
- Graceful handling of missing dependencies
- Fallback methods for critical operations

### 2. User Experience

- Non-intrusive text selection detection
- Smart button persistence (it doesn't disappear immediately)
- Clear status messages and error handling
- Attractive, modern UI design

### 3. Performance Optimization

- Efficient timer-based monitoring
- Minimal clipboard interference
- Lazy loading of UI components
- Resource cleanup on shutdown

### 4. Integration Patterns

- Signal-based communication with the main app
- Multiple fallback methods for model access
- Proper lifecycle management
- Clean separation of concerns

### 5. Error Handling

- Comprehensive error messages
- Graceful degradation when no model is available
- User-friendly error reporting
- Debug information for developers

## 🔧 Customization Examples

### Adding New Processing Actions

```python
def process_text(self, action):
    """Extended processing with more actions."""
    prompts = {
        "summarize": "Please provide a clear and concise summary of: {text}",
        "comment": "Please write a thoughtful comment about: {text}",
        "explain": "Please explain this text in simple terms: {text}",
        "translate": "Please translate this text to English: {text}",
        "improve": "Please improve the writing of this text: {text}"
    }

    prompt_template = prompts.get(action, prompts["summarize"])
    prompt = prompt_template.format(text=self.selected_text)

    # ... rest of the processing logic
```
### Custom Hotkeys

```python
def setup_hotkeys(self):
    """Set up custom hotkeys for the addon."""
    try:
        import keyboard

        # Register global hotkeys for instant processing
        keyboard.add_hotkey('ctrl+shift+s', self.quick_summarize)
        keyboard.add_hotkey('ctrl+shift+c', self.quick_comment)

    except ImportError:
        print("keyboard library not available for hotkeys")


def quick_summarize(self):
    """Quickly summarize the selected text without the UI."""
    # Get the current selection and process it immediately
    pass
```

### Configuration Support

```python
def load_config(self):
    """Load the addon configuration."""
    config_file = Path.home() / ".ggufloader" / "smart_floater_config.json"

    default_config = {
        "check_interval": 300,
        "button_timeout": 3000,
        "max_text_length": 5000,
        "auto_copy_results": True
    }

    try:
        if config_file.exists():
            with open(config_file) as f:
                user_config = json.load(f)
                return {**default_config, **user_config}
    except:
        pass

    return default_config
```
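The example only shows loading. A matching save helper is not part of the excerpt, but a minimal counterpart could look like this, assuming the same file location and that `Path` and `json` are imported as above:

```python
def save_config(self, config):
    """Persist the addon configuration (counterpart to load_config above)."""
    config_file = Path.home() / ".ggufloader" / "smart_floater_config.json"
    config_file.parent.mkdir(parents=True, exist_ok=True)
    with open(config_file, "w") as f:
        json.dump(config, f, indent=2)
```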
## 📊 Performance Considerations

### Memory Management

- Clean up UI components properly
- Avoid memory leaks in timer callbacks
- Use weak references where appropriate

### CPU Usage

- Optimize timer intervals
- Avoid blocking operations in the main thread
- Use QTimer.singleShot for delayed operations

### System Integration

- Minimize clipboard interference
- Respect the user's workflow
- Handle system sleep/wake events

## 🧪 Testing the Smart Floater

### Manual Testing Checklist

1. **Basic functionality**
   - [ ] Addon loads without errors
   - [ ] Status widget appears in the sidebar
   - [ ] Model connection test works

2. **Text selection**
   - [ ] Button appears when selecting text
   - [ ] Button stays visible for an appropriate time
   - [ ] Works across different applications

3. **AI processing**
   - [ ] Summarize function works correctly
   - [ ] Comment function generates appropriate responses
   - [ ] Errors are handled when no model is loaded

4. **UI/UX**
   - [ ] Floating button is positioned correctly
   - [ ] Popup window displays properly
   - [ ] Copy functionality works

### Automated Testing

```python
import unittest
from unittest.mock import Mock, patch


class TestSmartFloater(unittest.TestCase):
    def setUp(self):
        self.mock_gguf_app = Mock()
        self.addon = SimpleFloatingAssistant(self.mock_gguf_app)

    def test_model_connection(self):
        """Test model connection and retrieval."""
        mock_model = Mock()
        self.mock_gguf_app.model = mock_model

        result = self.addon.get_model()
        self.assertEqual(result, mock_model)

    @patch('pyperclip.paste')
    @patch('pyperclip.copy')
    def test_clipboard_operations(self, mock_copy, mock_paste):
        """Test that clipboard operations don't interfere."""
        mock_paste.return_value = "original text"

        self.addon.check_selection()

        # Verify the clipboard was restored
        mock_copy.assert_called_with("original text")
```

## 🚀 Next Steps

After studying the Smart Floater example:

1. **Create your own addon** using the patterns shown (follow the [Addon Development Guide](/docs/addon-development/ "Step-by-step addon creation tutorial"))
2. **Experiment with modifications** to understand the code better
3. **Read the full source code** in `addons/smart_floater/` (see [Package Structure](/docs/package-structure/ "Understanding addon file organization"))
4. **Reference the API documentation** ([Addon API Reference](/docs/addon-api/ "Complete API documentation")) for detailed method signatures
5. **Join the community** to share your addon ideas

## 📚 Related Documentation

- [Addon Development Guide](/docs/addon-development/ "Complete tutorial for creating your own addons") - step-by-step development guide
- [Addon API Reference](/docs/addon-api/ "Detailed API documentation for addon developers") - complete API documentation
- [Package Structure](/docs/package-structure/ "Understanding how addons are integrated") - technical architecture
- [Quick Start Guide](/docs/quick-start/ "Learn how to use the Smart Floater as an end user") - how to use the Smart Floater as an end user

---

**The Smart Floater is a great example of what's possible with GGUF Loader addons. Use it as inspiration for your own creations! 🎉**

Need help understanding any part of the code? Join our [community discussions](https://github.com/gguf-loader/gguf-loader/discussions) or contact support@ggufloader.com.
# Package Structure

This document explains the structure of the GGUF Loader 2.0.0 PyPI package and how the Smart Floating Assistant addon is included.

## 📦 Package Overview

**Package name**: `ggufloader`
**Version**: `2.0.0`
**Command**: `ggufloader`

When users install with `pip install ggufloader`, they get:

- Complete GGUF Loader application
- Smart Floating Assistant addon (pre-installed)
- Comprehensive documentation
- All necessary dependencies

## 🏗️ Directory Structure

```
ggufloader/
├── pyproject.toml                # Package configuration
├── readme_pypi.md                # PyPI package description
├── build_pypi.py                 # Build script for PyPI
├── requirements.txt              # Dependencies
│
├── # Main application files
├── main.py                       # Basic GGUF Loader (no addons)
├── gguf_loader_main.py           # GGUF Loader with addon support
├── addon_manager.py              # Addon management system
├── config.py                     # Configuration
├── utils.py                      # Utilities
├── icon.ico                      # Application icon
│
├── # UI components
├── ui/
│   ├── ai_chat_window.py         # Main chat interface
│   └── apply_style.py            # UI styling
│
├── # Core models
├── models/
│   ├── model_loader.py           # GGUF model loading
│   └── chat_generator.py         # Text generation
│
├── # UI mixins
├── mixins/
│   ├── ui_setup_mixin.py         # UI setup
│   ├── model_handler_mixin.py    # Model handling
│   ├── chat_handler_mixin.py     # Chat functionality
│   ├── event_handler_mixin.py    # Event handling
│   └── utils_mixin.py            # Utility functions
│
├── # Widgets
├── widgets/
│   └── chat_bubble.py            # Chat bubble component
│
├── # Pre-installed addons
├── addons/
│   └── smart_floater/            # Smart Floating Assistant
│       ├── __init__.py           # Addon entry point
│       ├── simple_main.py        # Main addon logic
│       ├── main.py               # Full-featured version
│       ├── floater_ui.py         # UI components
│       ├── comment_engine.py     # Text processing
│       ├── injector.py           # Text injection
│       ├── error_handler.py      # Error handling
│       ├── privacy_security.py   # Privacy features
│       └── performance_optimizer.py  # Performance
│
├── # Documentation
└── docs/
    ├── readme.md                 # Documentation index
    ├── installation.md           # Installation guide
    ├── quick-start.md            # Quick start guide
    ├── user-guide.md             # Complete user manual
    ├── addon-development.md      # Addon development guide
    ├── addon-api.md              # API reference
    ├── smart-floater-example.md  # Example addon
    ├── configuration.md          # Configuration guide
    ├── troubleshooting.md        # Troubleshooting
    ├── contributing.md           # Contributing guide
    └── package-structure.md      # This file
```

## 🚀 Installation and Usage

### Installation

```bash
pip install ggufloader
```

### Launch the Application

```bash
ggufloader
```

This command launches `gguf_loader_main.py`, which includes:

- Full GGUF Loader functionality
- Smart Floating Assistant addon (automatically loaded)
- Addon management system
- All UI components

## 🔧 How Addons Are Included

### Addon Discovery

When GGUF Loader starts, the `AddonManager` automatically:

1. **Scans** the `addons/` directory
2. **Finds** folders with `__init__.py` files
3. **Loads** addons by calling their `register()` function
4. **Displays** addon buttons in the sidebar

A simplified sketch of this flow is shown below.
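For illustration only, here is a rough sketch of what such a discovery loop can look like. This is not the actual `AddonManager` source; the `load_addons()` helper name and its arguments are assumptions.

```python
import importlib
from pathlib import Path


def load_addons(addons_dir="addons", parent=None):
    """Import every addon package and call its register() function."""
    widgets = {}
    for entry in sorted(Path(addons_dir).iterdir()):
        if entry.is_dir() and (entry / "__init__.py").exists():
            module = importlib.import_module(f"{addons_dir}.{entry.name}")
            register = getattr(module, "register", None)
            if callable(register):
                # Each addon returns a QWidget for the sidebar, or None
                widgets[entry.name] = register(parent)
    return widgets
```

Returning the widgets keyed by addon name makes it easy for the host window to create one sidebar button per addon.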
### Smart Floater Integration

The Smart Floating Assistant is included as a pre-installed addon:

```python
# addons/smart_floater/__init__.py
from .simple_main import register

__all__ = ["register"]

# When loaded, it provides:
# - Global text selection detection
# - Floating button interface
# - AI text processing (summarize/comment)
# - Seamless clipboard integration
```

### Addon Lifecycle

1. **Package installation**: addon files are installed with the package
2. **Application start**: the `AddonManager` discovers and loads addons
3. **User interaction**: users can access addons via the sidebar
4. **Background operation**: the Smart Floater runs continuously in the background

## 📋 Package Configuration

### pyproject.toml Key Sections

```toml
[project]
name = "ggufloader"
version = "2.0.0"
dependencies = [
    "llama-cpp-python>=0.2.72",
    "PySide6>=6.6.1",
    "pyautogui>=0.9.54",
    "pyperclip>=1.8.2",
    "pywin32>=306; sys_platform == 'win32'",
]

[project.scripts]
ggufloader = "gguf_loader.gguf_loader_main:main"

[tool.setuptools]
packages = [
    "gguf_loader",
    "gguf_loader.addons",
    "gguf_loader.addons.smart_floater"
]
include-package-data = true
```

### Package Data Inclusion

All necessary files are included:

- Python source code
- Documentation (`.md` files)
- Icons and images
- Configuration files
- Addon files

## 🎯 User Experience

### First-Time Users

1. **Install**: `pip install ggufloader`
2. **Launch**: `ggufloader`
3. **Load a model**: click "Select GGUF Model"
4. **Use the Smart Floater**: select text anywhere → click the ✨ button

### Addon Discovery

- The Smart Floater appears in the addon sidebar automatically
- Users can click it to open the control panel
- No additional installation required
- Works immediately after model loading

### Documentation Access

Users can access documentation:

- Online: GitHub repository
- Locally: installed with the package in the `docs/` folder
- In-app: help links and tooltips

## 🔄 Version Updates

### Updating the Package

When releasing new versions:

1. **Update the version** in `pyproject.toml`
2. **Update the changelog** and documentation
3. **Test addon compatibility**
4. **Build and upload** to PyPI

### Addon Updates

Smart Floater updates are included in package updates:

- Bug fixes and improvements
- New features and capabilities
- Performance optimizations
- Security enhancements

## 🛠️ Development Workflow

### For Package Maintainers

1. **Develop** new features and addons
2. **Test** thoroughly with various models
3. **Update** the documentation
4. **Build** the package with `python build_pypi.py`
5. **Upload** to PyPI

### For Addon Developers

1. **Study** the [Smart Floater Example](/docs/smart-floater-example/ "Complete addon example with detailed code analysis")
2. **Follow** the [Addon Development Guide](/docs/addon-development/ "Step-by-step addon creation tutorial")
3. **Reference** the [API documentation](/docs/addon-api/ "Complete API reference for developers")
4. **Create** addons in the `addons/` directory
5. **Test** with GGUF Loader (see the [Quick Start Guide](/docs/quick-start/ "Getting started tutorial"))
6. **Share** with the community

## 📊 Package Statistics

### Size and Dependencies

- **Package size**: ~50 MB (includes all dependencies)
- **Core dependencies**: 5 main packages
- **Optional dependencies**: GPU acceleration packages
- **Documentation**: 10+ comprehensive guides

### Compatibility

- **Python**: 3.8+ (tested on 3.8, 3.9, 3.10, 3.11, 3.12)
- **Operating systems**: Windows, macOS, Linux
- **Architectures**: x86_64, ARM64 (Apple Silicon)

## 🔍 Troubleshooting Package Issues

### Common Installation Issues

1. **Python version**: ensure Python 3.8+
2. **Dependencies**: install build tools if needed
3. **Permissions**: use the `--user` flag if needed
4. **Virtual environment**: recommended for isolation

### Addon Loading Issues

1. **Check the logs**: look for addon loading errors
2. **Verify the structure**: ensure `__init__.py` exists
3. **Dependencies**: check addon-specific requirements
4. **Permissions**: verify file permissions
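As a quick illustrative check that an addon package imports cleanly and exposes a callable `register()`, you can run something like the following from a source checkout (in the installed package the dotted path may differ):

```python
import importlib

module = importlib.import_module("addons.smart_floater")
print("register is callable:", callable(getattr(module, "register", None)))
```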
### Getting Help

- **Documentation**: check the `docs/` folder
- **GitHub issues**: report bugs and issues
- **Community**: join discussions and forums
- **Support**: contact support@ggufloader.com

## 🎉 Success Metrics

The package structure is designed to provide:

- **Easy installation**: a single `pip install` command
- **Immediate functionality**: the Smart Floater works out of the box
- **Extensibility**: a clear addon development path
- **Maintainability**: a well-organized codebase
- **User-friendliness**: comprehensive documentation

---

**This package structure ensures that GGUF Loader 2.0.0 provides a complete, professional AI text processing solution with the Smart Floating Assistant included by default! 🚀**
## 🎯 What's Next?

You've completed this guide! Here are some suggested next steps to continue your GGUF Loader journey:

- **Create your first addon** - Ready to extend GGUF Loader? Learn how to create custom addons with the [Addon Development Guide](/docs/addon-development/ "Step-by-step addon creation tutorial").
- **Join the community** - Connect with other GGUF Loader users, share your projects, and get help in our [community discussions](https://github.com/gguf-loader/gguf-loader/discussions).
- **Explore advanced tutorials** - Take your skills to the next level with advanced how-to guides and tutorials.