@tetris11@lemmy.ml Fair enough, I was basing my opinion on what some of the FAANG companies were doing to get rid of veteran staff by giving them an ultimatum over WFH.
(Ah, the joyful tantrum.) Educate yourself on how a simple JPEG works and exactly how few features are needed to produce an image that is almost indistinguishable from the source.
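To be concrete, here's the whole trick in a few lines. This is a toy sketch, not real JPEG (no quantization tables, no chroma subsampling, grayscale only, and the names are mine), but it shows how few DCT coefficients per block you actually need:

```python
# Toy JPEG idea: per 8x8 block, keep only a handful of DCT
# coefficients and throw the rest away. Assumes a grayscale float
# image whose dimensions are divisible by 8.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block: np.ndarray, keep: int = 10) -> np.ndarray:
    """DCT an 8x8 block, zero all but the `keep` largest-magnitude
    coefficients, invert. Real JPEG quantizes instead of truncating."""
    coeffs = dctn(block, norm="ortho")
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]  # cutoff for survivors
    coeffs[np.abs(coeffs) < threshold] = 0.0            # drop the small ones
    return idctn(coeffs, norm="ortho")

def jpeg_like(img: np.ndarray, keep: int = 10) -> np.ndarray:
    out = np.empty_like(img)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            out[i:i+8, j:j+8] = compress_block(img[i:i+8, j:j+8], keep)
    return out
```

Ten coefficients out of 64 per block is usually enough that you can't tell the reconstruction from the original at a glance.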
after employees with decades of experience left the company for remote jobs.
Corporate still won. Those were the most expensive employees, and companies are proving time and time again that they just want output and not quality.
The median would be interesting. Just in case there's one guy out there stocking a 12-footer.
(nice ad hominem) Christ. When you reduce a high-dimensional object into an embedding space, yes, you keep only the first N features, but those N features are the ones capturing the most variance, and the loadings they contain can be used to map back to a very good approximation of the source images. It's akin to reverse engineering a very lossy compression into something that very strongly resembles the source image (otherwise feature extraction wouldn't be useful), and it's entirely doable.
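A minimal sketch of that round trip with scikit-learn's PCA. The data here is synthetic low-rank noise standing in for a real image set (which is what makes the reconstruction near-perfect; all names and shapes are mine):

```python
# Project images into a low-dimensional PCA space, then map the
# loadings back through the components to recover the sources.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for a flattened image set: real image collections are
# highly correlated, so fake that with low-rank data.
basis = rng.normal(size=(50, 64 * 64))
X = rng.normal(size=(500, 50)) @ basis       # (n_images, n_pixels)

pca = PCA(n_components=50)                   # the "first N features"
loadings = pca.fit_transform(X)              # per-image coordinates

# Map the loadings back through the components: lossy decompression.
X_approx = pca.inverse_transform(loadings)

print("variance captured:", pca.explained_variance_ratio_.sum())
print("reconstruction MSE:", np.mean((X - X_approx) ** 2))
```

On data with low intrinsic dimensionality, 50 components capture essentially all the variance and the reconstruction error is numerically zero; on real images it's "very strongly resembles" rather than exact, which was the point.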
(thanks for the insult, stay classy) So the network training stage was pulled out of thin air, then? Huh, I didn't know these models could bootstrap themselves out of nothing.
I guess inverting models to do a tracing attack is impossible. Huh.
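Since apparently it needs spelling out, here's the skeleton of a model-inversion attack in PyTorch: gradient-ascend an input until the classifier is confident in a target class, recovering a class-representative image. Illustrative only; `net`, the shape, and the hyperparameters are mine, and published attacks (e.g. Fredrikson et al.) add priors and smarter regularizers:

```python
# Bare-bones model inversion via gradient ascent on the input.
import torch

def invert(net: torch.nn.Module, target: int, shape=(1, 1, 28, 28),
           steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    net.eval()
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = net(x)
        # Maximize the target logit, lightly penalize pixel energy.
        loss = -logits[0, target] + 1e-3 * x.pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)   # keep it a valid image
    return x.detach()

# e.g. x_hat = invert(trained_digit_net, target=3)
```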