AI Computation Layer
The AI Computation Layer is the brain of rep.fun, enabling users to ask questions, analyze data, or manage assets with complete privacy. Its standout feature is the integration of Large Language Models (LLMs) alongside task-specific AI models, all running securely within TEEs. LLMs, like those powering advanced chatbots, let users submit natural-language queries and receive detailed, context-aware responses, while smaller “micro-engines” handle specialized tasks such as medical diagnostics or portfolio optimization. The models themselves are stored encrypted in a decentralized marketplace and are decrypted and activated only inside a TEE, protecting both sensitive inputs and outputs.
LLMs are fine-tuned for efficiency, using quantized weights (e.g., 4-bit precision) to reduce memory needs, making them suitable for the constrained memory of TEE environments.
Data inputs, such as user queries or financial records, are encrypted with AES before processing, so plaintext data is never exposed outside the enclave.
The layer supports multi-language inputs, enabling global users to interact naturally with the AI.
This setup ensures even complex queries, like analyzing medical data or crafting DeFi strategies, stay private, with results delivered securely to the user.