Beta Release
Our biggest milestone yet -- PocketAI enters public beta with hybrid intelligence and
multi-agent capabilities.
- New: Hybrid cloud/on-device architecture that seamlessly routes queries based on complexity and connectivity
- New: Multi-agent system (beta) enabling collaborative AI agents for complex multi-step tasks
- New: Voice mode for premium users with real-time on-device speech-to-text and text-to-speech
- Improved: Beta program -- 1,500+ testers onboarded with a structured feedback pipeline and analytics dashboard
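The routing decision between cloud and device can be sketched as a small heuristic. Everything here is an illustrative assumption, not the shipped logic: the `route_query` name, the word-count complexity proxy, and the threshold of 40 are all made up for the example.

```python
def route_query(prompt: str, online: bool, threshold: int = 40) -> str:
    """Pick an execution target for one query (hypothetical sketch)."""
    # Crude complexity proxy: word count, plus a bonus for multi-step cues.
    cues = ("then", "compare", "step by step")
    complexity = len(prompt.split()) + 10 * sum(c in prompt.lower() for c in cues)
    if not online:
        return "on-device"  # offline: the local model is the only option
    return "cloud" if complexity > threshold else "on-device"
```

Note that offline queries always stay local, which keeps the offline-first guarantee of the foundation release intact.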
Performance Update
Major inference speed improvements and power efficiency gains across flagship Android
devices.
- Improved: GPU performance across all supported chipsets with dynamic workload balancing
- Improved: 40% faster inference on Snapdragon 8 Gen 3 via Qualcomm AI Engine Direct integration
- Improved: Battery usage reduced by 25% through intelligent power-state management and batch scheduling
- New: Phi-3 Mini model optimized for on-device deployment with 4-bit quantization
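As a rough illustration of what 4-bit quantization buys on-device, here is a minimal symmetric block-quantization sketch in NumPy. The function names and the single-scale-per-block scheme are assumptions for the example; GGML's actual Q4 formats differ in detail.

```python
import numpy as np

def quantize_q4(block: np.ndarray):
    """Symmetric 4-bit quantization of one weight block (illustrative)."""
    # One shared float scale per block; codes live in the signed range [-8, 7].
    scale = max(float(np.abs(block).max()) / 7.0, 1e-12)
    codes = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize_q4(codes: np.ndarray, scale: float) -> np.ndarray:
    # Reconstruct approximate weights; error is bounded by half a scale step.
    return codes.astype(np.float32) * scale
```

Storing 4-bit codes (packed two per byte in practice) plus one shared scale per block cuts weight memory roughly 4x versus float16, which is what makes models of this size fit comfortably on a phone.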
Agentic AI Preview
Introducing local AI agents that can take action on your behalf -- all running
privately on-device.
- New: Local AI agents for task automation, including file management, app control, and web research
- New: Calendar and email integration with natural-language scheduling and smart-reply generation
- New: Multi-step reasoning engine that breaks complex requests into executable sub-tasks
- Improved: Compatibility -- tested on 1,200+ devices across 47 hardware configurations
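The decompose-and-execute loop behind multi-step reasoning can be sketched as follows. The `plan`/`execute` names and the regex-based splitting are stand-ins for the model-driven planner, which does the decomposition with the on-device LLM rather than string matching.

```python
import re
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    done: bool = False

def plan(request: str) -> list:
    # Hypothetical decomposition: split a compound request on ordering
    # connectives into an ordered list of sub-tasks.
    parts = re.split(r"(?:,\s*then\s+|\s+then\s+|;\s*)", request.strip())
    return [SubTask(p.strip()) for p in parts if p.strip()]

def execute(tasks, run_step):
    # Run sub-tasks in order, collecting each step's result.
    results = []
    for task in tasks:
        results.append(run_step(task.description))
        task.done = True
    return results
```

Running each sub-task to completion before starting the next keeps intermediate state simple and lets a failed step halt the chain early.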
Foundation Release
Where it all began. The core engine that proves powerful AI can run entirely on your
phone.
- New: Core on-device inference engine built on GGML with custom memory management for mobile constraints
- New: Basic chat interface with conversation history, context-window management, and markdown rendering
- New: Offline-first architecture ensuring full functionality without any network connectivity
- New: Support for Llama 3.2 1B and 3B models with optimized 4-bit and 8-bit quantization variants
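Context-window management of the kind listed above can be sketched as a sliding-window trim. The `fit_context` name and the word-count token proxy are illustrative assumptions; the real engine's token accounting is model-specific.

```python
def count_tokens(message: dict) -> int:
    # Word count stands in for the model tokenizer (illustrative only).
    return len(message["content"].split())

def fit_context(messages: list, max_tokens: int) -> list:
    # Sliding-window policy: always keep the first (system) message, then
    # admit the most recent turns until the token budget is spent.
    system, turns = messages[0], messages[1:]
    budget = max_tokens - count_tokens(system)
    kept = []
    for msg in reversed(turns):
        cost = count_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + kept[::-1]
```

Dropping the oldest turns first preserves the system prompt and the recent conversation, which is usually what small on-device context windows need most.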