Just a quick check: is this location-based, or maybe the meme is very old? Not to say these things don't happen anymore, but I can access this one specifically just fine.
I’m also curious. A quick search came up with these; not sure which one is the most reliable/up to date.
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did a disservice to the analysis by not linking to the report (and code) that outlines the methodology and how the distribution of similarities looks. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
You should try asking the same question using xAI / Grok if possible. You might also ask ChatGPT about Altman.
I believe experiments like these should move slower and with more scrutiny, i.e. more animal testing before moving on to humans, especially given the controversies surrounding Neuralink’s last animal experiments.
What are the alternatives to ENV variables that are preferred in terms of security?
Yeah, I guess the formatting and the verbosity seem a bit annoying? I wonder what alternative solutions could better engage people from Mastodon, which is what this bot is trying to address.
Edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from the multiple posts I’ve seen this bot comment on.
I’m curious, why is this bot currently being downvoted for almost every comment it makes?
Thanks for the suggestions! I’m actually also looking into LlamaIndex for more conceptual comparison, though I haven’t gotten to building an app yet.
Any general suggestions for a locally hosted LLM to use with LlamaIndex, by the way? I’m also running into some issues with hallucination. I’m currently using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are fairly abstract, so the similarity will be conceptual: for example, “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose for a more literal comparison?
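To illustrate the distinction with the sentence pair above: a “literal” similarity can be computed without any embedding model at all, e.g. with stdlib difflib, whereas a semantic embedding model (like bge-large-en-v1.5) would score these two as highly similar despite sharing almost no surface text. A minimal sketch of the literal side:

```python
# Sketch: "literal" similarity via character-overlap ratio (no embedding model).
# A semantic embedding model would instead compare meanings, not characters.
from difflib import SequenceMatcher

a = "I have 3 large dogs"
b = "There are three canines that I own"

literal = SequenceMatcher(None, a.lower(), b.lower()).ratio()
print(f"literal overlap: {literal:.2f}")  # scores surface text only
```

Something overlap-based like this (or TF-IDF over tokens) may be closer to what you want for literal comparison than any LLM embedding.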
That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that can iteratively build up from similar sentences to find similar paragraphs. I can take a stab at coding it up, but I was just wondering if there are similar frameworks out there already that I can model it after.
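The build-up idea could be sketched as: score sentence pairs, then aggregate each sentence’s best match into a paragraph-level score. This is a hypothetical sketch, not an existing framework; `sent_sim` here is a placeholder (token Jaccard) you would swap for an embedding-based similarity:

```python
# Hypothetical sketch: paragraph similarity aggregated from sentence similarities.
def sent_sim(a: str, b: str) -> float:
    # Placeholder metric: token Jaccard overlap.
    # Swap in cosine similarity over embeddings for real use.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def para_sim(p1: str, p2: str) -> float:
    # Naive sentence split; a real splitter (e.g. from nltk) would be better.
    s1 = [s for s in p1.split(".") if s.strip()]
    s2 = [s for s in p2.split(".") if s.strip()]
    if not s1 or not s2:
        return 0.0
    # For each sentence in p1, take its best match in p2, then average.
    best = [max(sent_sim(a, b) for b in s2) for a in s1]
    return sum(best) / len(best)
```

The same pattern could then be repeated one level up (paragraphs into sections), which is roughly the iterative build-up you describe.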
I think many have also been wondering about version control for legislation/law documents for some time. But I’ve never understood why it hasn’t been realized yet.
Here are some options:
Thanks for the Floccus suggestion. It says it syncs over Nextcloud Bookmarks; does that mean you wouldn’t need a dedicated app apart from Nextcloud?
I’m not entirely sure what you mean by “printable reports”. Would you maybe want to post an example sketch?
Anyway, have you considered writing the variables out to LaTeX, then rendering that to a PDF?
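Something like this, as a hedged sketch of the idea: the variable names and file name are made up for illustration, and `pdflatex` is only invoked if it is actually installed on the machine.

```python
# Sketch: dump variables into a LaTeX file, then compile it to PDF.
# Variable names/values are hypothetical placeholders.
import shutil
import subprocess
from pathlib import Path

values = {"temperature": 21.5, "humidity": 40}  # hypothetical readings

rows = "\n".join(rf"\texttt{{{k}}} & {v} \\" for k, v in values.items())
Path("report.tex").write_text(
    "\\documentclass{article}\n"
    "\\begin{document}\n"
    "\\section*{Report}\n"
    "\\begin{tabular}{ll}\n"
    f"{rows}\n"
    "\\end{tabular}\n"
    "\\end{document}\n"
)

# Compile only if a LaTeX toolchain is available.
if shutil.which("pdflatex"):
    subprocess.run(["pdflatex", "-interaction=nonstopmode", "report.tex"], check=True)
```

Any templating approach (e.g. Jinja2 over a .tex template) would work the same way; the point is just that LaTeX gives you the “printable” part for free.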
That looks cool! Do commenters need a GitHub account to do that, though?
Thanks, and it’s great that this can be used for other static pages as well.
Stupid general question: in the “install it yourself” guide, they say this needs to be run on a VPS, for example with DigitalOcean. I’m thinking of deploying on fly.io, which I understand is an alternative to Heroku. Is there a conceptual difference between these types of solutions (DigitalOcean vs. fly.io, for example) that might affect hosting?
Cool, thanks for the suggestions!
I’d never heard of Webmentions, but I’ve heard some people have integrated Mastodon into their Jekyll pages; I wonder if that’s the same thing.
I’ve heard that it’s not very privacy-respecting; is that right?
I wonder how the survey was sent out and whether that affected the sampling.
Regardless, with ~3–4k responses, that’s disappointing, if not concerning.
I only have a more personal sense of Lemmy. Do you have a source on Lemmy’s gender diversity?
Anyway, what do you think are the underlying issues? And what would be some suggestions for the community to address them?