I hadn’t really even considered that Apple wouldn’t be working on their own LLM. Seems like everyone is making their own LLM these days.
They possibly are (or at least have people doing research), it’s just not very good (yet?) https://aimodels.substack.com/p/apple-is-working-on-multimodal-ai
Remember the early days of Apple Maps?
If that’s an indication, Apple’s AI offerings will someday be as good as or better than Google’s. Because Apple Maps is pretty great these days, but was absolute garbage when they rolled it out.
Apple is working on models, but they seem to be focusing on ones that use tens of gigabytes of RAM, compared to tens of terabytes.
I wouldn’t be surprised if Apple ships an “iPhone Pro” with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model like that… but it can’t compete with GPT-4 or Gemini today - and those are moving targets. OpenAI/Google will have even better models (likely using even more RAM) by the time Apple enters this space.
A split system, where some processing happens on device and some in the cloud, could work really well. For example, analyse every email/message/call a user has ever sent/received with the local model, but if the user asks how many teeth a crocodile has… you send that one to the cloud.
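A minimal sketch of that split routing idea, assuming a simple keyword heuristic to decide whether a query touches personal data (all function names here are hypothetical stand-ins, not real APIs):

```python
# Hypothetical local/cloud router: queries about the user's own data go to a
# small on-device model; general-knowledge queries go to a large cloud model.

PERSONAL_KEYWORDS = {"email", "message", "call", "calendar", "photo", "contact"}

def run_local_model(query: str) -> str:
    # Stand-in for a small on-device model that can see private data.
    return f"[local] {query}"

def call_cloud_api(query: str) -> str:
    # Stand-in for a large hosted model with general knowledge.
    return f"[cloud] {query}"

def route_query(query: str) -> str:
    """Return 'local' if the query mentions personal data, else 'cloud'."""
    words = set(query.lower().split())
    return "local" if words & PERSONAL_KEYWORDS else "cloud"

def answer(query: str) -> str:
    if route_query(query) == "local":
        return run_local_model(query)
    return call_cloud_api(query)
```

In practice the routing decision would itself likely be made by a small classifier model rather than keywords, but the privacy boundary is the same: personal data never leaves the device.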
Tbf, Google has versions of Gemini that will run locally on phones too, and their open Gemma models run on 16GB of RAM or so.
What does Apple do with all that money???
Good grief.
How about they collaborate on a messaging standard instead?
This gives Google all the data.