Anthropic, the US-based AI startup, on Tuesday announced a feature that allows its AI models to use computers in a more human-like manner. The capability is designed to help developers automate repetitive tasks by letting AI agents move the cursor, click buttons, and type text.
How Will The Feature Work?
Currently in public beta, the feature is integrated into Anthropic's latest language model, Claude 3.5 Sonnet, which was updated this week alongside another model, Claude 3.5 Haiku. As reported by TechTarget, the new technology lets developers instruct Claude to interact directly with computer interfaces, enabling applications ranging from task automation to software development and exploratory research.
By utilising an API developed by Anthropic, programmers can automate tasks that typically require human intervention. This move aligns with a growing trend in the generative AI sector towards AI agents, a shift further highlighted by Salesforce's recent introduction of large action models and its Agentforce platform.
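In practice, developers describe a "computer" tool to the model and then execute the actions it requests in a loop. The sketch below assembles such a request payload without making a network call; the tool type, model name, and field names follow Anthropic's public beta documentation at the time of writing, but should be treated as assumptions to verify against the current API reference.

```python
# Minimal sketch of a "computer use" request for Anthropic's Messages API.
# The tool type ("computer_20241022"), model name, and display fields are
# assumptions based on the public beta docs; check current documentation.

def build_computer_use_request(instruction: str,
                               width: int = 1280,
                               height: int = 800) -> dict:
    """Assemble a computer-use request payload (no API call is made)."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",  # beta computer-use tool type
                "name": "computer",
                "display_width_px": width,    # screen size the agent "sees"
                "display_height_px": height,
            }
        ],
        "messages": [
            {"role": "user", "content": instruction}
        ],
    }

request = build_computer_use_request("Open the spreadsheet and sum column B.")
```

In a real client, this payload would be sent through the SDK's beta messages endpoint; the model then replies with tool-use actions (such as taking a screenshot, moving the mouse, or typing), which the developer's own code executes on the machine and reports back, repeating until the task completes.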
Anthropic acknowledged that the computer interaction feature is still in development, with challenges in performing tasks such as scrolling, dragging, and zooming, actions that humans typically execute with ease. Additionally, the Claude 3.5 Haiku model, which excels in coding tasks, is set to be released later this month across various platforms, including Anthropic's API, Amazon Bedrock, and Google Cloud's Vertex AI.
Anthropic noted that several enterprises, including Asana, Canva, and DoorDash, have already utilised this computer interaction feature before its public release, suggesting a strong interest in the potential for automation.
While the new feature appears to be a significant advancement, it shares similarities with existing robotic process automation (RPA) and business process automation (BPA) tools, which have long been used in the business process management sector.
What Are The Risks?
Despite the promising opportunities, the new tool carries certain risks. As reported by TechTarget, experts cautioned against premature adoption, citing concerns about security, safety, and responsible usage. The feature could potentially allow malicious actors to exploit system vulnerabilities by directing AI systems into computer networks.