General Programming Discussion

!programming@lemmy.ml

Make Puppeteer wait for a page to fully load

Taking accurate screenshots with Puppeteer has been a real pain, especially with pages that don’t fully load when the standard waitUntil: load fires. A real pain. Some sites, particularly SPAs built with Angular, React, or Vue, end up half-loaded, leaving me with screenshots where parts of the page are just blank. Peachy, just peachy.

I've had the same issue with waitUntil: domcontentloaded, but that one was kind of expected. The problem is that the page load event fires too early, and for pages relying on JavaScript to load images or other resources, this means the screenshot captures a half-baked page. Useless, obviously.

After some digging, accompanied by a certain type of language (the beep type), I did find a few workarounds. For example, you can have Puppeteer wait for specific DOM elements to appear or disappear. Another approach is to wait for the network to be idle for a certain amount of time. But what really helped was finding a custom function that waits for DOM updates to settle (source). It's the closest thing to a solution for getting fully loaded screenshots across different types of websites, at least from what I was able to find. Hope it helps anyone who struggles with this issue.
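
Putting those together, here's roughly the shape I ended up with. The settle helper below is my own sketch of the idea, not the exact function from the source, and the '#app-ready' selector and the timings are just placeholders you'd tune per site:

```js
const puppeteer = require('puppeteer');

// Sketch of the "wait for DOM updates to settle" idea: resolve once no
// mutations have happened for `quietMs`, but give up after `timeoutMs`.
async function waitForDomToSettle(page, quietMs = 1000, timeoutMs = 30000) {
  await page.evaluate((quietMs, timeoutMs) => {
    return new Promise((resolve) => {
      let quietTimer;
      const observer = new MutationObserver(restartQuietTimer);

      function finish() {
        observer.disconnect();
        clearTimeout(quietTimer);
        clearTimeout(hardStop);
        resolve();
      }
      function restartQuietTimer() {
        clearTimeout(quietTimer);
        quietTimer = setTimeout(finish, quietMs);
      }

      const hardStop = setTimeout(finish, timeoutMs);
      observer.observe(document.body, { childList: true, subtree: true, attributes: true });
      restartQuietTimer();
    });
  }, quietMs, timeoutMs);
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // 1. Wait for the network to go (mostly) idle instead of the bare `load` event.
  await page.goto('https://example.com', { waitUntil: 'networkidle0', timeout: 60000 });

  // 2. Optionally wait for an element that only shows up once the SPA has rendered.
  //    '#app-ready' is a placeholder selector, not something from a real site.
  // await page.waitForSelector('#app-ready', { visible: true });

  // 3. Wait for DOM mutations to settle before taking the screenshot.
  await waitForDomToSettle(page);

  await page.screenshot({ path: 'screenshot.png', fullPage: true });
  await browser.close();
})();
```

The MutationObserver resets the quiet timer on every DOM change, so the promise only resolves once the page has been quiet for the chosen window (or once the hard timeout kicks in).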

Visual Data Structures Cheat-Sheet

https://photonlines.substack.com/p/visual-data-structures-cheat-sheet

Slingcode is a personal computing platform in a single html file.

https://slingcode.net/

Personal computing platform. Make, run, and share web apps.

The Generation Ship Model of Software Development

https://medium.com/@wm/the-generation-ship-model-of-software-development-5ef89a74854b

You live with your family in Area Delta-44 of floor 212. Floors 140 through 266 are your world. Your life is walls and corridors, air ducts and stairwells, and—if you’re lucky—elevators. You live…

Garbage collection and closures

https://jakearchibald.com/2024/garbage-collection-and-closures/

GC within a function doesn't work how I expected

LLMs as interactive rubber ducks (or Q&A trainers)

I initially wrote this as a response to this joke post, but I think it deserves a separate post.

As a software engineer, I am deeply familiar with the concept of rubber duck debugging. It's fascinating how "just" (re-)phrasing a problem can open up a path to a solution or shed light on my own misconceptions or confusions. (As an aside, I find that other things can have a similar effect: writing commit messages, or re-reading my own code under a different "lighting". For instance, after I finish a branch and push it to GitLab, I will sometimes immediately go and review the code (or just the diff) in GitLab, as opposed to my terminal or editor, and sometimes notice new things.)

But another thing I've been realizing for some time is that these "a-ha" moments always come with mixed feelings. Sure, it's great that I've found the solution, but it also feels like a bit of a downer. I suspect that while crafting the question, I've subconsciously also been looking forward to the social interaction that would come from asking it: suddenly belonging to a group of engineers having a crack at the problem.

The thing is: I don't get that with ChatGPT. I don't get that, because there was never going to be any social interaction to begin with.

With ChatGPT, I can do the rubber duck debugging thing without the sad part.

If no rubber duck debugging happens and ChatGPT answers my question, then that case is obvious: I can just move on.

If no rubber duck debugging happens and ChatGPT fails to answer my question, then at least by that point I have gained some clarity about the problem, which I can reuse to phrase my question to an actual community of peers, be it an IRC channel, a Discord server, or our team Slack channel.


So I'm wondering, do other people tend to use LLMs as this sort of interactive rubber duck?

And, as a bit of a stretch of this idea: could an LLM be thought of as a tool for practicing asking questions, prior to actually asking real people?


PS: I should mention that I'm also not a native English speaker (which I guess is probably obvious by now from my writing), so part of my "learning to ask questions" is also learning to do it specifically in English.

Are SW design patterns guilty until proven otherwise?

I started writing this as an answer to someone on some Discord server, but it didn't fit the channel topic, and I'd still love to see people's views on this.

So I'll quote the comment, just as a primer:

The safest pattern to use is to not use any pattern at all and write the most straight forward code. Apply patterns only when the simplest code is actually causing real problems.

First and foremost: many paths to hell are paved with design patterns applied willy-nilly. (A funny aside: the OO community seems to be more active and organized in describing patterns (while often not warning strongly enough about the dangers of inheritance, the true lord of the pattern rings), which leads to the lower-level, simpler patterns being underrepresented.)

But the other extreme is not without issues either, far from it.

I've seen too many FastAPI endpoints talking to the DB like there's no tomorrow. That is definitely the "straightforward" approach, but the first problem is already there: it's pretty much untestable, and soon enough everyone is coupling to random DB columns (and making random assumptions about their content, usually based on "well, let's see who writes what there" analysis), which makes the schema hard to change without playing whack-a-bug.

And what, our initial DB design was not future-proof? Tough luck changing it now. So new endpoints end up trying to make up for the obsolete schema, using pandas everywhere to do what SQL or some storage layer (perhaps with a unit-of-work pattern) should be doing, and further cementing in the obsolete design. Eventually it's close to impossible to know who writes or expects what, so now everyone had better be defensive, adding even more cruft (and more room for bugs).
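
To make that concrete, here's a rough sketch of the difference I mean. It's in JavaScript/Express rather than the FastAPI code I'm describing, and every name in it (the users table, the meta_json column, usersRepo, the db object with a query(sql, params) method) is made up for illustration:

```js
const express = require('express');

// Assumption: `db` exposes query(sql, params) and returns a single row object.

// The "straightforward" version: every handler queries the DB directly, so
// column names and encoding assumptions (meta_json holding tags) leak into
// each endpoint, and testing means standing up a real database.
function directApp(db) {
  const app = express();
  app.get('/users/:id', async (req, res) => {
    const row = await db.query(
      'SELECT id, name, meta_json FROM users WHERE id = $1',
      [req.params.id]
    );
    res.json({ id: row.id, name: row.name, tags: JSON.parse(row.meta_json).tags });
  });
  return app;
}

// With a thin storage layer: schema knowledge lives in one module, and the
// handler can be unit-tested against a fake repo instead of a real database.
function usersRepo(db) {
  return {
    async getById(id) {
      const row = await db.query(
        'SELECT id, name, meta_json FROM users WHERE id = $1',
        [id]
      );
      return { id: row.id, name: row.name, tags: JSON.parse(row.meta_json).tags };
    },
  };
}

function layeredApp(repo) {
  const app = express();
  app.get('/users/:id', async (req, res) => {
    res.json(await repo.getById(req.params.id));
  });
  return app;
}
```

The second version isn't much more code, which is sort of the point: a thin storage layer is cheap enough to put in on day one, long before the "real problems" show up.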

My point is, I guess, that by the time there are identifiable "real problems" to be solved by a pattern, it's far too late.

Look, in general, postponing a decision in order to have more information can be a great strategy. But that depends on the quality of the information you get by postponing. If that extra information is just going to be the new features you added in the meantime, it will be heavily biased by all the defensive, making-up-for-bad-DB junk you forced yourself to keep adding. It's not necessarily going to be any easier to see the right pattern.

So the tricky part is: which patterns are actually strong enough, yet unobtrusive enough, that you can start applying them early on? That's the million-dollar question.

I don't think "straightforward" gets you toward answering that question. (Well, to be fair, I'm sure people have made $1M with "straightforward code", so there's that, but is it a good bet?)

(By the way, the real world actually has a nice pattern specifically for getting out of that hole; it's called "your competitor moving faster and being cheaper than you", so in a healthy market the problem should solve itself eventually...)


So what are your ideas? Do you have design patterns or disciplines that you tend to apply generally to new projects?

I'm not looking for actual patterns (although it's fine to suggest your favorites, or link to resources); I'm mainly interested in what people think about patterns in general, and in how to apply them over the lifetime of a project.

"JavaScript Haikus - Tiny Code Adventures" by Frank Force

"JavaScript Haikus - Tiny Code Adventures" by Frank Force

Open link in next tab

- YouTube

https://www.youtube.com/watch?v=CagnRwPkw_M


2024 Stack Overflow Developer Survey

https://survey.stackoverflow.co/2024/

In May 2024, over 65,000 developers responded to our annual survey about coding, the technologies and tools they use and want to learn, AI, and developer experience at work. Check out the results and see what's new for Stack Overflow users.

Software engineers are not (and should not be) technicians

https://www.haskellforall.com/2024/07/software-engineers-are-not-and-should.html

I don't actually think predictability is a goo...