@IHeartBadCode
@kbin.social

Quick things to note.
One, yes, some models were trained on CSAM. In AI you'll have checkpoints in a model. As a model learns new things, you have a new checkpoint. SD1.5 was the base model used in this. SD1.5 itself was not trained on any CSAM, but people have given additional training to SD1.5 to create new checkpoints that have CSAM baked in. Likely, this is what this person was using.
Two, yes, you can get something out of a model that was never in the model to begin with. It's complicated, but a way to think about it is, a program draws raw pixels to the screen. Your GPU applies some math to smooth that out. That math adds additional information that the program never distinctly pushed to your screen.
Models have tensors, which, long story short, are a way to express an average way pixels should land to arrive at some object. This is why you see six-fingered people in AI art. There wasn't any six-fingered person fed into the model; what you are seeing is the averaging of weights pushing pixels between two different relationships for the word "hand". That averaging is adding new information in the expression of an additional finger.
I won't deep dive into the maths of it. But there are ways to coax new ways of averaging weights to arrive at new outcomes. The training part is what sets the relationship between A and C to be B'. But if we wanted D' as the outcome, we could retrain the model to have C and E averaging, OR we could use things called LoRAs to adapt the low-rank weights so B' becomes D'. This doesn't require us to retrain the model; we are just providing guidance on ways to average things that the model has already seen. Retraining on C and E to get D' is the route old models and checkpoints had to take, and that requires a lot of images. Taking the outcome B' and putting a thumb on the scale to push it to D' is an easier route; it just requires a generalized teaching of how to skew the weights, which is much easier.
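For the curious, the "thumb on the scale" idea can be sketched in a few lines. This is a toy illustration of low-rank adaptation, not Stable Diffusion's actual code; the matrix sizes and values below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen weight matrix standing in for one layer of a pretrained model.
d = 512
W = rng.standard_normal((d, d))

# LoRA: instead of retraining W, learn two small matrices whose product is a
# low-rank "nudge" layered on top of the frozen weights at inference time.
r = 4                                   # the adapter's rank, tiny next to d
A = rng.standard_normal((r, d)) * 0.01  # down-projection (trained)
B = rng.standard_normal((d, r)) * 0.01  # up-projection (trained)
alpha = 1.0                             # scaling knob

delta = (alpha / r) * (B @ A)           # the rank-r update
W_eff = W + delta                       # what the model actually uses

# The adapter carries 2*d*r numbers instead of d*d -- which is why a LoRA
# file is tiny compared to a full checkpoint.
print("adapter params:", 2 * d * r)     # 4096
print("full-layer params:", d * d)      # 262144
print("rank of the update:", np.linalg.matrix_rank(delta))  # 4
```

The base weights never change; only the small adapter is trained, which is why no full retraining pass over millions of images is needed.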
I know this is massively summarizing things and yeah I get it, it's a bit hard to conceptualize how we can go from something like MSAA to generating CSAM. And yeah, I'm skipping over a lot of steps here. But at the end of the day, those tensors are just numbers that tell the program how to push pixels around given a word. You can maths those numbers to give results that the numbers weren't originally arranged to do in the first place. AI models are not databases, they aren't recalling pixel for pixel images they've seen before, they're averaging out averages of averages.
I think this case will be a slam dunk because highly likely this person's model was an SD1.5 checkpoint that was trained on very bad things. But with the advent of being able to change how the averaging itself works, rather than the source tensors in the model, you can teach new ways for a model to average weights to obtain results the model didn't originally have, without any kind of source material to train the model on. It's like the difference between spatial antialiasing and MSAA.
Okay for anyone who might be confused on how a model that's not been trained on something can come up with something it wasn't trained for, a rough example of this is antialiasing.
In the simplest of terms, antialiasing looks at a vector over a particular grid, sees what percentage of each cell it is covering, and then applies that percentage to shade the image and reduce the jaggies.
There's no information to do this in the vector itself; it's the math that is giving the extra information. We're creating information from a source that did not originally have it. Now, yeah, this is a really simple approach and it might have you go "well technically we didn't create any new information".
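Here's a toy version of that coverage idea. The line and grid are made up, and real MSAA samples geometry coverage rather than a hard-coded test like this, but the principle is the same: a strictly yes/no source produces in-between gray values that were never in the source.

```python
# The "source" is a hard on/off test (is this point below the line y = 0.5*x?),
# yet the output contains fractional values the source never produced --
# information added purely by the math.

def coverage(px, py, samples=8):
    """Fraction of pixel (px, py), sampled on a samples x samples
    sub-grid, that falls below the line y = 0.5*x."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if y < 0.5 * x:          # the hard-edged source signal
                hits += 1
    return hits / (samples * samples)

# One row of pixels crossing the line's edge: hard 0s and 1s, plus the
# in-between shades that smooth the jaggies.
row = [round(coverage(px, 3), 2) for px in range(10)]
print(row)  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25, 0.75, 1.0, 1.0]
```

The 0.25 and 0.75 values are the "new" information: nothing in the on/off test ever emitted them.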
At the end of the day, a tensor is a bunch of numbers that give weights to how pixels should arrange themselves on the canvas. We have weights that show how pixels should fall to form an adult. We have weights that show how pixels should fall to form children. We have weights that show how pixels should fall to form a nude adult. There are ways to adapt the lower-rank weights to find new approximations. I mean, that's literally what LoRAs do. That's literally their name: Low-Rank Adaptation. As you train on this new novel approach, you can wrap that into a textual inversion. That's what that does; it allows an ontological approach to particular weights within a model.
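A toy sketch of what textual inversion is doing under the hood: freeze the model entirely and train only a brand-new token's embedding until the frozen model maps that token where you want. The dimensions, data, and the simple linear stand-in for "the model" below are all made up for illustration, not SD's real architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 8))     # frozen "model" layer, never updated
target = rng.standard_normal(16)     # output we want the new token to produce

emb = np.zeros(8)                    # the new token's embedding:
                                     # the ONLY thing being trained
lr = 0.01
for _ in range(5000):
    err = W @ emb - target
    emb -= lr * (W.T @ err)          # gradient of 0.5 * ||W @ emb - target||^2

# emb now lands as close to `target` as the frozen weights allow -- a new
# "word" the model understands, found without touching the model itself.
print("final residual:", np.linalg.norm(W @ emb - target))
```

Nothing in `W` changed; we only discovered a point in embedding space that steers the frozen weights toward a chosen outcome, which is the whole trick.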
Another way to think of this: six-fingered people in AI art. I assure you that no model was fed six-fingered subjects, so where do they come from? The answer is that the six-fingered person is a complex "averaging" of the tensors that make up the model's weights. We're getting new information where there originally was none.
We have to remember that these models ARE NOT databases. They are just multidimensional weights that tell pixels from a random seed where to go in the next step of the diffusion process. If you text2image "hand" then there's a set of weights that push pixels around to form the average value of a hand. What it settles into could be a four-fingered hand, five fingers, or six fingers, depending on the seed and how hard the diffuser follows the guidance scale for that particular prompt's weight. But it's distinctly not recalling pixel for pixel some image it has seen earlier. It just has a bunch of averages of where pixels should go if someone says hand.
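That "how hard the diffuser follows the guidance scale" knob is classifier-free guidance, and it is just arithmetic on two of the model's noise predictions per step. The vectors below are made-up stand-ins for real noise predictions:

```python
import numpy as np

rng = np.random.default_rng(2)
eps_uncond = rng.standard_normal(4)   # prediction with an empty prompt
eps_cond = rng.standard_normal(4)     # prediction with, say, "hand" as prompt

def guided(eps_uncond, eps_cond, scale):
    # scale = 1 just follows the prompt; larger values exaggerate whatever
    # the prompt adds over the unconditional average.
    return eps_uncond + scale * (eps_cond - eps_uncond)

print(guided(eps_uncond, eps_cond, 1.0))   # identical to eps_cond
print(guided(eps_uncond, eps_cond, 7.5))   # typical SD-style hard steer
```

Same weights, same seed; only the scale changes how far the sample gets pushed along the prompt's direction at every denoising step.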
You can generate something new from the average of complex tensors. You can put your thumb on the scale for some of those weights, give new maths to find new averages, and then when it's getting close to the target you're after use a textual inversion to give a label to this "new" average you've discovered in the weights.
Antialiasing doesn't feel like new information is being added, but it is. That's how we can take the actual pixels being pushed out by a program and turn it into a smooth line that the program did not distinctly produce. I get that it feels like a stretch to go from antialiasing to generating completely novel information. But it's just numbers driving where pixels get moved to, it's maths, there's not really a lot of magic in these things. And given enough energy, anyone can push numbers to do things they weren't supposed to do in the first place.
The way folks who need their models to be on the up and up handle this is to ensure that particular averages don't happen. Like, say we want to avoid outcome B', but you can average A and C to arrive at B'. Then what you need is to add a negative weight to the formula. This is basically training A and C to average to something like R' that's really far from the point that we want to avoid. But like any number, if we know the outcome is R' for an average of A and C, we can add low-rank weights that don't require new layers within the model. We can just say anything with R' needs a -P' weight; now, because of averages, we could land on C', but we could also land on A' or B', our target. We don't need to recalculate the approximation of the weights by which A and C give R' within the model.
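The "-P' weight" idea can be sketched as plain vector math: take the model's ordinary average and subtract a penalty along the direction you want to steer away from, with no retraining of how A and C combine. Every vector and value here is made up purely for illustration:

```python
import numpy as np

A = np.array([1.0, 0.0, 2.0])          # stand-in for concept A
C = np.array([3.0, 2.0, 0.0])          # stand-in for concept C
avoid = np.array([0.0, 1.0, 0.0])      # direction of the unwanted outcome

base = (A + C) / 2                     # the model's ordinary average

def steer(v, avoid, weight):
    """Push v away from `avoid` by removing `weight` times
    v's component along that direction."""
    unit = avoid / np.linalg.norm(avoid)
    return v - weight * np.dot(v, unit) * unit

steered = steer(base, avoid, 1.0)      # weight 1.0 strips the component fully
print(base)     # [2. 1. 1.]
print(steered)  # [2. 0. 1.]
```

The averaging of A and C is untouched; we only put a thumb on the scale after the fact, which is the same reason such guardrails can be pushed against the other way.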
For instance, this includes minerals for battery and other components to produce EVs and wind turbines – such as iron, lithium, and zinc
I found nothing within the IEA's announcement that indicates a shortage of those three elements. Iron is like the fourth most abundant thing on the planet.
In fact, this story literally reports this whole thing all wrong. It's not that there's a shortage, it's that the demand for renewables is vastly larger than what we're mining for. Which "duh" we knew this already. The thing this report does is quantify it.
That said, the "human rights abuses" part isn't from the IEA report. That comes from the Business and Human Rights Resource Centre (BHRRC).
Specifically, the BHRRC has tracked these for seven key minerals: bauxite, cobalt, copper, lithium, manganese, nickel and zinc. Companies and countries need these for renewable energy technology, and electrification of transport.
These aren't just limited to the renewable industry. Copper specifically, you've got a lot of it in your walls and in the device that you are reading this comment on. We have always had issues with copper and it's whack-a-mole for solutions to this. I'm not dismissing BHRRC's claim here, it's completely valid, but it's valid if we do or do not do renewables. Either way, we still have to tackle this problem. EVs or not.
Of course, some companies were particularly complicit. Notably, BHRRC found that ten companies were associated with more than 50% of all allegations tracked since 2010
And these are the usual suspects who routinely look the other way on human rights abuses. China, Mexico, Canada, and Switzerland: this is the list of folks who drive a lot of the human rights abuses; it's how it has been for quite some time now. That's not to be dismissive of the other folks out there (because I know everyone is just itching to blame the United States somehow), but these four are usually the ones getting their hands smacked. Now to be fair, it's really only China and Switzerland that usually do not care one way or the other. Canada and Mexico are just the folks the US convinced to take the fall for its particular appetite.
For example, Tanzania is extracting manganese and graphite. However, he pointed out that it is producing none of the higher-value green tech items like electric cars or batteries that need these minerals
Third Congo war incoming. But yeah, seriously, imperialism might have officially ended after World War II, but western nations routinely do this kind of economic fuckening, because "hey at least they get to self-govern". It's what first world nations tell themselves to sleep better for what they do.
Avan also highlighted the IEA’s advice that companies and countries should shift emphasis to mineral recycling to meet the growing demand.
This really should have happened yesterday. But doing something today would at least be proactive about the situation. Of course, many first world nations, when they see a problem, respond with "come back when it's a catastrophe."
OVERALL: This article is attempting to highlight that recycling is a very doable thing if governments actually invested in the infrastructure to do so, and that if we actually recycled things, we could literally save ⅓ of the overall cost for renewables. It's just long term economic sense to recycle. But of course, that's not short term economic sense. And so with shortages to meet demand on the horizon, new production is going to be demanded, and that will in turn cause human rights violations.
They really worded the whole thing oddly and used the word shortage, like we're running out, when they meant shortage as in "we can't keep up without new production". They got the right idea here, I just maybe would have worded all of it a bit differently.
Oh look Nintendo doing more shitty things. Mild shock
"We literally lack the ability to hire programmers that can write a decent networking stack, but boy can we litigate."
This issue is a bit more complex than just "hospitals shouldn't be for profit". Not to dismiss that's a big driver here, but there's a lot more going on.
Rural communities tend to have lower insurance coverage, that means for the people who do show up, their debt will eventually go into collections or be completely written off as a loss. Rural communities have vastly less access to better insurance and many just completely forgo insurance altogether.
Additionally, rural communities have a tendency to enter a death spiral between visits and costs. The number of people showing up at the hospital is low, but for the ones that do they show up with incredibly expensive conditions.
A lot of the financing and extended lines of revenue for rural hospitals is tied into the expanded Medicaid offerings under the Affordable Care Act (ACA). There's clear demonstration that states that have opted to not expand Medicaid are the ones overwhelmingly facing hospital closures. States that have expanded still face issues, but states that have not are facing worse outcomes for rural hospitals.
Finally, costs for healthcare have steadily increased at rates that outpace pretty much every program out there. Pharmaceutical companies are ever shifting the costs of materials and medication, making long term planning difficult. These companies cite new regulation requiring a remixing of the costs of their products. Basically, if some state mandates $30 insulin, that makes cancer treatment go up by some massive percentage. So a requirement to reduce cost to the consumer in one area induces an increase in cost somewhere else.
And no, just telling hospitals they can't drive a profit won't fix the issue. The doctors, insurance, coverage, politics over the ACA, the education of those doctors, the supply chain of the hospitals, and the production of medical supplies have all played a role in this. There are just thousands of things that have to change, or we're going to see more of this.
The entire thing is predicated on a completely unsustainable economic model. It was never sustainable; it's just that the losses had to eventually add up enough to run the thing into the ground. And this isn't limited to just Red States; it's just that the Red States are the ones least prepared for this slowly building problem. This issue is coming for everywhere. There's no hospital that's going to survive this if we do not fundamentally change the system upon which our healthcare is built.
There are just too many flaws to band-aid here. We have to have a massive overhaul of our system or people are literally going to die. The problem is that we can't tell who is going to be at the steering wheel to direct those changes. There has to be a shared vision between the two major political parties that can endure for decades to ensure that whatever new system is made actually gets built. If the two parties that run our government can never agree, hang it up folks, we're done here. I know some people are going to take that as an invite to bash the other party, but at the end of the day, we either all work together or we don't.
We have to have some sort of change to our system, like, yesterday. It needs to be a massive change that takes effect at ALL of the layers within the healthcare system. We cannot keep making minor incremental changes; it's just plugging one hole in the dam only for another one to spring forth.
I looked at that picture that they had up for that "100,000" headcount. All I know is that there are a lot of people on Trump's team and in Wildwood, NJ who are extremely bad at estimating headcount. That picture of the crowd at its largest is (being absolutely generous here) roughly 20,000 tops. 100,000 people is a massive amount of people, like it is a lot of people. There is zero way there's 100,000 in that picture. When you hit 100,000 people, you know it, because it's an ungodly amount of people.
Seinfeld has publicly supported Israel following the 7 October Hamas attack, and traveled to a kibbutz in December to meet with hostages’ families
In case you're wondering what the argument is. You should still read the story though.