Nope. Some people signed a letter, not a resignation. I wonder how committed those employees are to OpenAI’s ideals if all it takes for them to quit is another position at … wait for it … Microsoft.
He has no clue how much capital it takes to achieve what they’re trying to do.
Jesus, maybe you can enlighten him. Or maybe you can realize that money isn’t an issue for OpenAI at the moment; even Altman said so months ago. It’s not about money, it’s about how they approach the problem. And OpenAI’s long-term vision is not that of a Microsoft-style capitalist approach, even though they partnered with a capitalist company, and only under certain conditions at that.
With regard to the rest, Russia isn’t a race, Ilya was born there, …
Fun fact: there are no human races to begin with (but that’s exactly what racist theories were built on). Somehow only the U.S. still uses the term (along with some other country I forget). “Racism” refers to discrimination based on ethnicity, which also includes things like a common nation of origin. So yes, you made a racist statement.
Let’s thank “God” that it isn’t some redditor deciding on AI alignment and safety just so he can use an “uncensored” model to jerk off.
Seems to me that Ilya Sutskever must be some kind of nut job idealist / egoist.
Funny you say that, since the reason they fired Altman was a set of moves by Altman that were not in OpenAI’s interest, but rather ego moves that threatened the safety of AGI development.
And you can shove your racist, stereotypical comments about a person “being originally from Russia” where the sun doesn’t shine, by the way. The decision came from the whole board, and Ilya is Canadian, raised partly in Israel.
Recommending a model that randomly produces EOS tokens feels off to me. The OpenHermes 2 Mistral model sucks, in my opinion; it seems to have serious flaws.
The bad thing is, just because Andrew Ng states this doesn’t make it true, or the possibility of dangers any less relevant. There are people outside of big business, like Hinton, who also warn about them, even though he is no longer part of any “big company.”
Also, what is this all about? In the end there are multiple scenarios in which AI can harm society; it probably won’t be Terminator rising. On the other hand, precautions revolve around the fact that we actually don’t know, because this technology is so new.
I also don’t think that “big companies” like OpenAI even need to shut down smaller businesses, because, as Sam Altman stated, incoming money really isn’t an issue for them at all. They are drowning in money.
While there are certainly people who only care about money and other status symbols, I still believe that many people working within those companies genuinely try to be truthful about their work as individuals.
Really nice explanation, thank you!
So if I only want min_p sampling at 0.05 to have an effect with llama.cpp, for example, which values should the other sampling parameters, like top_k (0?), top_p (1.0?) and temperature (1.0?), be set to so they have no influence?
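For reference, here’s roughly what I have in mind, as a sketch using the llama-cpp-python bindings (assuming a build recent enough to expose min_p; the model path is just a placeholder). My understanding is that top_k=0, top_p=1.0 and temperature=1.0 should each be no-ops, leaving min_p as the only sampler actually filtering tokens:

```python
from llama_cpp import Llama

# Placeholder model path; substitute whatever GGUF file you actually use.
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf")

out = llm(
    "Explain min_p sampling in one sentence.",
    max_tokens=128,
    top_k=0,          # <= 0 keeps the full vocabulary, so top-k filtering is disabled
    top_p=1.0,        # 1.0 keeps the whole probability mass, so top-p is a no-op
    min_p=0.05,       # the only sampler actually pruning tokens here
    temperature=1.0,  # 1.0 leaves the distribution unscaled
)
print(out["choices"][0]["text"])
```

On the plain llama.cpp CLI the equivalent would be something like --top-k 0 --top-p 1.0 --min-p 0.05 --temp 1.0, if I’m reading the docs right.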