{"id":28748,"date":"2023-05-05T20:19:08","date_gmt":"2023-05-06T03:19:08","guid":{"rendered":"https:\/\/coinnetworknews.com\/tim-cook-says-apple-will-weave-ai-into-products-as-researchers-work-on-solving-bias\/"},"modified":"2023-05-05T20:19:08","modified_gmt":"2023-05-06T03:19:08","slug":"tim-cook-says-apple-will-weave-ai-into-products-as-researchers-work-on-solving-bias","status":"publish","type":"post","link":"https:\/\/coinnetworknews.com\/tim-cook-says-apple-will-weave-ai-into-products-as-researchers-work-on-solving-bias\/","title":{"rendered":"Tim Cook says Apple will weave AI into products as researchers work on solving bias"},"content":{"rendered":"



CEO Tim Cook gave a rare, if guarded, glimpse into Apple's walled garden during the Q&A portion of a recent earnings call when asked his thoughts on generative artificial intelligence (AI) and where he "sees it going."

Cook refrained from revealing Apple's plans, stating upfront, "We don't comment on product roadmaps." However, he did intimate that the company was interested in the space:

"I do think it's very important to be deliberate and thoughtful in how you approach these things. And there's a number of issues that need to be sorted. … But the potential is certainly very interesting."

The CEO later added that the company views "AI as huge" and would "continue weaving it in our products on a very thoughtful basis."

Cook's comments on taking a "deliberate and thoughtful" approach could explain the company's absence from the generative AI space. However, there are some indications that Apple is conducting its own research into related models.

A research paper scheduled to be published at the Interaction Design and Children conference this June details a novel system for combating bias in the development of machine learning datasets.

Bias, the tendency for an AI model to make unfair or inaccurate predictions based on incorrect or incomplete data, is often cited as one of the most pressing concerns for the safe and ethical development of generative AI models.
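To make the concern concrete, here is a minimal sketch of how incomplete data can translate into biased predictions. The data and the deliberately trivial "model" are hypothetical illustrations, not drawn from the Apple paper:

```python
# Minimal illustration (hypothetical data): when one group is
# underrepresented in training data, a naive model that learns the
# overall majority label mispredicts for that group.
from collections import Counter

# Toy training set of (group, true_label). Group "B" is barely represented.
train = [("A", "approve")] * 90 + [("A", "deny")] * 5 + [("B", "deny")] * 5

# A deliberately simple "model": always predict the overall majority label.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

test = [("A", "approve"), ("B", "deny"), ("B", "deny")]
for group, truth in test:
    pred = majority_label
    print(f"group={group} predicted={pred} actual={truth} correct={pred == truth}")

# The model is right for the well-represented group "A" but wrong for
# every example from the underrepresented group "B".
```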

"So glad OpenAI is keeping its bias in check." pic.twitter.com/y4a7FUochR

Brooklin Nash (@realBrookNash), April 27, 2023

The paper, which can currently be read in preprint, details a system by which multiple users would contribute to developing an AI system's dataset with equal input.

Status quo generative AI development doesn't incorporate human feedback until the later stages, by which point models have typically already acquired training bias.

The new Apple research integrates human feedback at the very early stages of model development, essentially democratizing the data selection process. The result, according to the researchers, is a system that employs a "hands-on, collaborative approach to introducing strategies for creating balanced datasets."
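As a rough sketch of that idea (the contributor structure, function name and balancing rule below are assumptions for illustration, not the paper's actual method), early-stage collaboration could amount to pooling labeled examples from several contributors and then downsampling so every label is equally represented before any training happens:

```python
import random
from collections import defaultdict

# Hypothetical contributions: each participant submits (text, label) pairs.
contributions = {
    "alice": [("great service", "positive"), ("loved it", "positive")],
    "bob":   [("terrible app", "negative")],
    "carol": [("works as expected", "positive"), ("crashed twice", "negative")],
}

def build_balanced_dataset(contributions, seed=0):
    """Pool examples from all contributors, then downsample each label
    to the size of the rarest label so the resulting dataset is balanced."""
    by_label = defaultdict(list)
    for person, examples in contributions.items():
        for text, label in examples:
            by_label[label].append((text, label, person))

    smallest = min(len(items) for items in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for label, items in by_label.items():
        balanced.extend(rng.sample(items, smallest))
    rng.shuffle(balanced)
    return balanced

for text, label, person in build_balanced_dataset(contributions):
    print(f"{label:8s} {text!r}  (from {person})")
```

The point of the toy example is simply that every contributor's data reaches the pool, and the balance across labels is enforced, before a model is trained, rather than feedback being layered on after the model has already formed its tendencies.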

Related: AI's black box problem: Challenges and solutions for a transparent future

It bears mention that this research study was designed as an educational paradigm to encourage novice interest in machine learning development.

It could prove difficult to scale the techniques described in the paper for use in training large language models (LLMs) such as ChatGPT and Google Bard. However, the research demonstrates an alternative approach to combating bias.

Ultimately, the creation of an LLM without unwanted bias could represent a landmark moment on the path to developing human-level AI systems.

Such systems stand to disrupt every aspect of the technology sector, especially the worlds of fintech, cryptocurrency trading and blockchain. Unbiased stock and crypto trading bots capable of human-level reasoning, for example, could shake up the global financial market by democratizing high-level trading knowledge.

Furthermore, demonstrating an unbiased LLM could go a long way toward satisfying government safety and ethical concerns about the generative AI industry.

This is especially noteworthy for Apple, as any generative AI product it develops or chooses to support would stand to benefit from the iPhone's integrated AI chipset and its 1.5 billion-user footprint.