3.2 Federated fine-tuning of AI models
An estimated 70 to 80% of customer journeys and business workflows are predicted to be replaced by AI workflows [27-30]. HumanChain can supply data that is broader in scope, greater in variety, cleaner, more accurate, validated, and permissioned. Because HumanChain combines the identity and data layers, a new capability emerges: foundation models that have reached peak performance on broad datasets for specific use cases can be further fine-tuned on aggregated or consented customer-specific data to deliver customised or personalised experiences on the edge device. Since the identity and data layers, including permissions, are stacked alongside each other, data from across applications can be used to enhance each user's experience across the ecosystem.
How it works:
Central Server Distributes Model: A central server distributes a pre-trained AI model (such as a large language model) to participating devices.
Local Training: Each device trains the model locally on its own private data. This avoids sharing sensitive data with the central server.
Model Updates Shared: The devices only share the updates to the model parameters, not the raw data itself.
Central Server Aggregates: The central server aggregates these model updates to refine the overall model.
Iteration: This process iterates, with the improved model sent back to devices for further training, ultimately leading to a model better suited for diverse tasks and data.
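The loop above can be sketched in a few lines. The example below is a minimal, illustrative FedAvg-style round with a toy two-parameter "model"; the function names, the simulated local training step, and the learning rate are assumptions for the sketch, not HumanChain's actual protocol:

```python
def local_update(weights, local_data, lr=0.1):
    """Train locally on a device's private data; return only the
    parameter delta, never the raw data itself."""
    # Toy training step: nudge each weight toward the local data mean.
    target = sum(local_data) / len(local_data)
    return [lr * (target - w) for w in weights]

def federated_round(global_weights, device_datasets):
    """One round: distribute the model, train locally on each device,
    then aggregate the shared updates on the server."""
    deltas = [local_update(global_weights, data) for data in device_datasets]
    # Server averages the updates; private datasets stay on-device.
    avg_delta = [sum(d[i] for d in deltas) / len(deltas)
                 for i in range(len(global_weights))]
    return [w + u for w, u in zip(global_weights, avg_delta)]

# Three devices, each holding a private dataset the server never sees.
device_datasets = [[1.0, 2.0], [3.0], [2.0, 2.5]]
weights = [0.0, 0.0]  # the "pre-trained" global model
for _ in range(50):   # iterate: improved model goes back out each round
    weights = federated_round(weights, device_datasets)
```

After enough rounds the global weights settle toward a consensus across all devices' data, even though only parameter deltas ever crossed the network. Production systems add compression, secure aggregation, and differential privacy on top of this basic exchange.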
Value propositions:
Democratization of AI: Federated learning reduces dependency on big, centralized datasets by enabling training on data controlled by several entities. This can enable people and smaller businesses to create and enhance AI models without jeopardizing the privacy of user data.
Privacy-Preserving AI Development: Traditional AI development frequently requires sharing sensitive data with a central party. Federated fine-tuning avoids this by keeping data on user devices or in distributed storage, which is particularly important for sectors such as banking and healthcare.
Increased Model Generalisability: AI models trained on a wider variety of data from diverse sources tend to be more versatile and perform better across different circumstances. This is especially helpful for tasks that must adapt to varying contexts.
Collaborative Innovation: By allowing organizations to update models without revealing sensitive data, federated learning promotes collaboration between them. When competition or privacy laws limit data sharing, this might encourage innovation in certain sectors.