https://support.google.com/pixelphone/thread/242705137/google-pixel-update-november-2023?hl=en
Hey there everyone. I think the Photon project has matured enough that I feel ready to replace the default Lemmy frontend with it. Since this instance now serves roughly 1,000 people, I figured this was worth holding a vote on!
Please check out Photon as currently hosted at https://p.lemdro.id.
If you support changing the default frontend to Photon, upvote my comment on this post. If you don't support it, downvote that same comment.
Thanks!
https://futurism.com/the-byte/stability-ai-stable-diffusion-chaos
From fundraising more than $100 million to hemorrhaging top talent, the firm that makes Stable Diffusion has had a heck of a year.
This is a very old instance, with 0 activity. Do you still look at this ever?
Earlier today, I identified the root cause of an issue causing annoying, intermittent 502 errors. If you've ever had an action load infinitely until you refreshed the page, that was this issue. I deployed a fix and am slowly scaling it down to stress test it. If you encounter infinite loading or an HTTP 502 error, please let me know!
UPDATE: Stress testing complete. Theoretically, we should be equipped to handle another 5k users without any intervention from me.
Hello folks! I am migrating the image backend to an S3-compatible provider for cost and reliability reasons. During this time, thumbnails and other images hosted here will be borked, but the rest of Lemdro.id will remain online. Thank you for your patience!
UPDATE: Image migration is gonna take a hot minute. Should be done in around 6 hours, I'll get it fully fixed up in around 7-8 hours when I wake up (~08:30 PDT)
UPDATE 2: It failed, yay! Alright, fine. I turned the image proxying back on. I am migrating to S3 in the background and will switch over when it is done. Any images uploaded in the next 8 hours or so may end up being lost.
UPDATE 3: Migration complete. Will be rolling out the update to S3-backed image storage in around 6 hours (~6pm PDT)
UPDATE 4: Object storage backend deployed! Thanks for your patience folks.
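For anyone curious what "S3-backed image storage" looks like on a Lemmy instance: the image server (pict-rs) can point its store at any S3-compatible bucket. A rough sketch of that config is below; the exact key names are assumptions on my part and vary by pict-rs version, so check the pict-rs docs for your release before copying anything.

```toml
# Sketch of a pict-rs object-storage store section (key names are
# illustrative assumptions; verify against your pict-rs version's docs)
[store]
type = "object_storage"
endpoint = "https://s3.example-provider.com"  # hypothetical provider endpoint
bucket_name = "lemdroid-images"               # hypothetical bucket
region = "auto"
access_key = "REDACTED"
secret_key = "REDACTED"
```

The migration itself is just copying every existing object from the old filesystem store into the bucket, then flipping the store config over, which is why uploads during the copy window could be lost.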
I'm sure you all have noticed the latency problems on this instance. Stage 1 of my 4-stage scaling roadmap is taking place tonight as I migrate the database to physically run closer to the machines running Lemmy.
I will do a more detailed write-up on this later, but the gist is that each db operation required a new connection from Lemmy, and that means a brand-new SSL handshake since the db is managed elsewhere. Pooling would solve this, but in my testing Lemmy does not handle a properly configured PgBouncer correctly. So the solution is to move the database closer, inside the private network, to avoid SSL handshakes altogether.
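To make the per-operation handshake cost concrete, here's a toy sketch (not Lemmy's actual client code): a minimal pool pays a simulated handshake cost once per connection and reuses connections, while the unpooled path pays it on every operation.

```python
import queue
import time

HANDSHAKE_COST = 0.02  # seconds; stand-in for a real SSL handshake


def connect():
    """Simulate opening a db connection, including the expensive handshake."""
    time.sleep(HANDSHAKE_COST)
    return object()  # stand-in for a live connection


class ConnPool:
    """Minimal pool sketch: handshake once per connection, reuse after."""

    def __init__(self, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # handshakes happen up front, once

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)


def run_ops_pooled(pool, n):
    for _ in range(n):
        conn = pool.acquire()
        # ... run query on conn ...
        pool.release(conn)


def run_ops_unpooled(n):
    for _ in range(n):
        connect()  # fresh handshake for every single operation
        # ... run query, then drop the connection ...


if __name__ == "__main__":
    pool = ConnPool(size=4)
    t0 = time.perf_counter()
    run_ops_pooled(pool, 20)
    pooled = time.perf_counter() - t0

    t0 = time.perf_counter()
    run_ops_unpooled(20)
    unpooled = time.perf_counter() - t0

    print(f"pooled: {pooled:.2f}s, unpooled: {unpooled:.2f}s")
```

Moving the database inside the private network gets a similar win by a different route: the connections still get recreated, but each handshake stops being expensive.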
TL;DR instance gonna go brrrr, downtime starting at 10:30pm pacific time tonight, should be done by 11:30pm
@cole@lemdro.id