Presumably. Wouldn’t take much to fake that though.
When we’re talking about teaching kids the alphabet, we need to train both individual letters and letters as they appear in words.
This is only slightly related but I once met a young (USAmerican) adult who thought the stripy horse animal’s name was pronounced zed-bra in British English and it was really hard to convince her otherwise. In her mind zebra was strongly connected to Z-bra, so of course if someone was to pronounce the letter “zed” it would turn into “zed-bra” and not just into “zeh-bra”.
That data is also publicly available (of course), so a model could be trained on it. I’d love to say I’d doubt Google/YouTube would ever do that, but at this point nothing would surprise me.
I trained the generative models all from scratch. Pretrained models are not that helpful when it’s important to accurately capture very domain specific features.
One of the classifiers I tried was based on zoobot with a custom head. Assuming the publications around zoobot are truthful, it was trained exclusively on similar data from a multitude of different sky surveys.
Does it? I worked on training a classifier and a generative model on freely available galaxy images taken by Hubble and labelled in a citizen science approach. Where’s the theft?
Reading this comment gave me the idea to get Gemini’s “opinion”. Ironically, it turns out Google might’ve actually done better if they did that.
Prompt:
I design ads for Google for a living and was charged with writing an ad for you (Gemini) to be aired during ad breaks at the Olympics.
My current idea is as follows:
A father wants to help his daughter write a letter to her idol, Olympic track star Sydney McLaughlin-Levrone. In a voiceover, the dad says his daughter “might even be the world’s No. 1 Sydney fan.” In the ad he will ask you, Gemini, to “help my daughter write a letter telling Sydney how inspiring she is and be sure to mention that my daughter plans on breaking her world record one day. (She says sorry, not sorry.)”
What do you think of this idea?
Gemini output (emphasis mine):
Evaluation of the Ad Idea
The concept of using Gemini to help a young fan write a letter to her idol is heartwarming and relatable. It aligns well with the aspirational and inspiring tone of the Olympics.
However, there are a few points to consider:
Potential Issues
Overemphasis on competition: The line “She says sorry, not sorry” might come across as overly competitive, which could potentially alienate viewers.
Dependence on AI: While showcasing Gemini’s capabilities, it’s important to maintain a balance. Over-reliance on AI to write the letter might diminish the authenticity and emotional impact of the story.
Target audience: Consider the target audience for the Olympics. While the story is heartwarming, it might resonate more with a family audience than with core sports fans.
Suggestions
Focus on inspiration: Instead of emphasizing competition, focus on the inspiring qualities of Sydney McLaughlin-Levrone and how she motivates the young girl.
Highlight human connection: Emphasize the bond between the father and daughter and how they are using Gemini as a tool to connect with their idol.
Visual storytelling: Use strong visuals to capture the emotion and excitement of the story. Show the girl’s admiration for Sydney, the father’s support, and the magic of Gemini helping them craft the perfect letter.
Overall, the idea has potential, but refining the focus and execution can elevate the ad’s impact.
Would you like to explore some alternative approaches or focus on developing the concept further?
Daily login bonus…
I didn’t say that.
I expect it to be about as awful as Starfield. However, unlike Starfield (which didn’t sell horrendously by any source I can find, just not great) it has incredible brand recognition behind it. I have no doubts it will sell based on that alone as long as it looks like Skyrim 2 at first glance.
Edit: right after posting I figured out how to formulate what else I wanted to say but couldn’t find the correct words for: “Sadly profitability and quality don’t always correlate.”
5.5 years? No way they’ll shut down this quickly. The next Elder Scrolls alone will carry them into 2030. (As much as I would enjoy you being right though…)
Have you tried reading it? It’s written so poorly that I really hope no human was involved in this and it’s just AI generated garbage.
My bad, I wasn’t precise enough with what I wanted to say. Of course you can confirm (with astronomically high likelihood) that a screenshot of AI Overview is genuine if you get the same result with the same prompt.
What you can’t really do is prove the negative. If someone gets an output, replicating their prompt won’t necessarily give you the same output, for a multitude of reasons: it might take everything else Google knows about you into account, Google might have tweaked something in the last few minutes, the stochasticity of the model might lead to a different output, etc.
Also, funny you bring up image generation, where this actually works in some cases. For example, researchers have run the same prompt with multiple different seeds; if there’s a cluster of very similar output images, you can surmise that an image looking very close to them was in the training set.
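A toy sketch of that detection idea, assuming each output image is a float array in [0, 1]. The function name and both thresholds are made up for illustration; real memorization studies use more robust similarity measures than raw pixel distance.

```python
import numpy as np
from itertools import combinations

def looks_memorized(images, dist_threshold=0.05, min_cluster=3):
    """Heuristic: given several generations from the SAME prompt with
    DIFFERENT seeds, flag probable memorization if `min_cluster` or more
    of them are nearly pixel-identical (a tight cluster of outputs)."""
    near = [0] * len(images)
    for i, j in combinations(range(len(images)), 2):
        # mean absolute pixel difference as a crude similarity measure
        if np.mean(np.abs(images[i] - images[j])) < dist_threshold:
            near[i] += 1
            near[j] += 1
    # an image close to (min_cluster - 1) others sits inside a cluster
    return any(n >= min_cluster - 1 for n in near)
```

Independently sampled outputs of a non-memorizing model should be far apart in pixel space, so only a memorized motif produces such a cluster.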
Assuming AI Overview does not cache results, they would be generated at search-time for each user and “search-event” independently. Even recreating the same prompt would not guarantee a similar AI Overview, so there’s no way to confirm.
Edit: See my comment below for what I actually meant to say
Assuming we shrink all spatial dimensions equally: with Z, the diagonal also shrinks, so the two horizontal lines move closer together and no longer fit onto the original horizontal lines. Only once you shrink the Z far enough that it fits within the line width can you fit it into itself again. X, I, and L all work at any arbitrary amount of shrinking, though.
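The scaling condition can be sketched as a quick inequality (h and w are my own labels for the glyph height and stroke width, not from the original comment):

```latex
% Let h be the height of the Z and w its stroke (line) width.
% A copy scaled by a factor 0 < s < 1 has height s h, so both of its
% horizontal bars can lie inside the original glyph only once the whole
% copy fits within a single stroke:
s\,h \le w \quad\Longleftrightarrow\quad s \le \frac{w}{h}
% X, I, and L have no such lower bound: any 0 < s < 1 works, since a
% scaled copy can slide along the original strokes.
```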
So is the example with the dogs/wolves and the example in the OP.
As to how hard they are to resolve: the dog/wolves one might be quite difficult, but for the example in the OP it wouldn’t be hard to feed in all images (during training) with randomly chosen backgrounds, removing the model’s ability to draw any conclusions from the background.
However, this would probably unearth the next issue: the human graders who were presumably used to create the original training dataset have their own biases around race, gender, appearance, etc. That doesn’t even necessarily mean they were racist/sexist/etc., just that they struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
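A minimal sketch of that background-randomization augmentation, assuming images as NumPy arrays with a per-image foreground mask. The function name and the uniform-noise background are my own illustration; a real pipeline would more likely composite a pool of real background photos.

```python
import numpy as np

def randomize_background(image, fg_mask, rng=None):
    """Return a copy of `image` whose background pixels (fg_mask == False)
    are replaced with uniform noise, so a classifier cannot learn
    background shortcuts. `image`: HxWxC uint8, `fg_mask`: HxW bool."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    out = image.copy()
    out[~fg_mask] = noise[~fg_mask]  # keep the subject, scramble the rest
    return out
```

Applied with a fresh random background on every epoch, the background carries no stable signal the model could latch onto.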
I find it wild that, to this day, Windows defaults to opening them in a browser. Windows has an image viewer right there.
Can that image viewer extract text so that a user could easily copy/paste it? I think if whatever PDF I was opening didn’t allow me to do that, I would be really frustrated.
Remember the people who registered the malicious libraries that ChatGPT made up and suggested, in the hopes someone would blindly install them? You can do this much more easily here. Check which websites this tends to hallucinate when typing “google”, “youtube”, “facebook”, etc., and if any of them don’t exist yet, register that address and host a phishing version of the corresponding site there.
Eh, nothing I did was “figuring out which loophole [they] use”. I’d think most people in this thread talking about the mathematics that could make it a true statement are fully aware that the companies are not using any loophole and just say “above average” to save face. It’s simply a nice brain teaser to some people (myself included) to figure out under which circumstances the statement could be always true.
Also if you wanna be really pedantic, the math is not about the companies, but a debunking of the original Tweet which confidently yet incorrectly says that this statement couldn’t be always true.
Same. I had PayPal do an automated charge back because their system thought I was doing something fraudulent when I wasn’t. Steam blocked my account.
Talking to support and re-buying said game did fix the issue for me.
If you wanna see a language model (almost) exclusively trained on 4chan, here you go.