- Tether’s CEO proposes that AI models should run on local devices to protect privacy.
- Locally executed AI allows for independent, secure, and offline access to AI capabilities.
- Tether is exploring integrating local AI models following a security breach at OpenAI.
Paolo Ardoino, CEO of the blockchain platform Tether, has recently emphasized the importance of localizing artificial intelligence (AI) models to protect user privacy and enhance data security. According to Ardoino, enabling AI models to run directly on devices such as smartphones and laptops, rather than relying on external servers, represents a significant shift towards safeguarding personal data and improving model resilience.
He argued that keeping data on the device not only secures personal information but also lets the models function offline, enhancing user independence and resilience against potential cyber threats.
Ardoino contended that modern devices are already powerful enough to fine-tune large language models (LLMs) on user-specific data. The resulting personal enhancements would be stored locally rather than on external servers, thereby bolstering privacy and security.
Expansion into AI and Recent Security Breaches
The move towards localized AI models is particularly relevant for Tether as the company expands its footprint into the AI sector. Ardoino mentioned that Tether is actively exploring how to incorporate these localized models into their AI solutions, especially in light of recent security vulnerabilities exposed in the industry.
The initiative comes in response to a significant security breach at OpenAI, the developer behind ChatGPT. In early 2023, a hacker gained access to OpenAI’s internal communications, compromising sensitive details about the company’s AI technologies. The incident has sparked a broader discussion about the need for stronger security measures in AI development and deployment.
Privacy Concerns and Industry Reaction
Further complicating the landscape, it was revealed that the ChatGPT desktop app for macOS stored user conversations in plain-text files on disk, raising serious concerns about user privacy. Although the issue has reportedly been resolved, it has led to increased scrutiny of how major tech companies handle user data.
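Storing transcripts in world-readable plain text is avoidable even without full encryption. As a minimal sketch (the `save_chat_locally` helper and file layout are illustrative, not OpenAI’s actual fix), a desktop app can restrict a chat log to owner-only permissions at the moment the file is created:

```python
import json
import os
import stat
import tempfile

def save_chat_locally(messages, path):
    """Write a chat transcript to a file only the owning user can read.

    `messages` is a list of {"role": ..., "content": ...} dicts; the
    filename and structure here are illustrative, not from any real app.
    """
    # os.open sets restrictive permissions (0o600, owner read/write only)
    # atomically at creation time, instead of chmod-ing after the fact.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(messages, f)

# Example: store a transcript in a fresh temp directory.
chat = [{"role": "user", "content": "hello"}]
path = os.path.join(tempfile.mkdtemp(), "chat.json")
save_chat_locally(chat, path)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # owner-only permissions on POSIX systems
```

Owner-only permissions are the floor, not the ceiling; encrypting the file at rest would go further, but even this minimal step prevents other local accounts and unsandboxed processes from casually reading the transcripts.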
The industry’s response has been varied, with some stakeholders advocating for a decentralized approach to AI development. This would challenge the current dominance of big tech companies and potentially lead to a fairer and more secure future for AI technology users.
As the AI field continues to evolve, the calls for more robust privacy measures and for localized AI models are likely to shape how new technologies are developed and deployed across the globe.