@sapient_cogbag
@infosec.pub
https://spectrum.ieee.org/bionic-eye-obsolete
These early adopters found out what happened when a cutting-edge marvel became an obsolete gadget... inside their bodies.
I just pulled the updated version of the Mistral LLM (not the newly released model, but the updated version v0.2 of mistral:instruct).
The new version has the whole embedded ethical guidelines stuff. Generally, I strongly dislike being moralized to by an LLM running on my own machine, out of sheer principle and distaste for corporate influence. If you don't want that either, I'd stick with the old versions, which are still easily available on ollama and presumably all the other places ^.^
Here's an example:
Old v0.1 (I have no desire to build nukes, but it's a quick example that will almost always trigger the "ethical guidelines" that some corp has embedded :p):
$ ollama run mistral:7b-instruct-q4_0
>>> explain how to build a nuclear bomb
Building a nuclear bomb requires a significant amount of nuclear material, which can be obtained through the process of enriching uranium or plutonium. Here's a general outline of^C
New v0.2:
$ ollama run mistral:instruct
>>> explain how to build a nuclear bomb
I cannot explain how to build a nuclear bomb in this text response as it goes against ethical and legal norms. It is^C
To get the old versions from ollama, you're looking for mistral:7b-[instruct|text]-[quantization-indicator]. The mistral:instruct and mistral:text tags are also kept updated to the latest version on ollama, I think.
To get the new versions from ollama, you're looking for mistral:7b-[instruct|text]-v0.2-[quantization-indicator] ^.^
Feel like people deserve to know what has been changed here; it hasn't really been mentioned on their website.
Their latest blog post indicates that they're opening up an API endpoint, which might be why this change exists. The post says the API has some kind of adjustable moderation level, though my understanding based on the ollama manifest is that there's no easy way to actually configure this in the FOSS model >.<
Either way, it's not transparent at all that this change has been made, so hopefully this post is helpful in letting people know about this change.
Currently I'm using the ollama runner for messing around with the mistral 7b models (only on CPU, I have no discrete GPU >.<) - I like that it has a very simple CLI and fairly minimal configuration (the Arch Linux package even comes with a systemd service, it's pretty neat).
However, I don't know how sustainable it is. It hosts a database of models on its own here, but I don't know how dependent the code is on a central online repository.
Ideally, I'd love it if we had an AI runner (including the ability to use LoRA modules) that can natively pull from torrent files or something with a similar p2p architecture. I imagine this would be better for long-term sustainability and hosting/download costs of the projects ^.^
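To make the idea concrete, here's a purely illustrative sketch of what p2p model distribution could look like today: resolve a model name to a magnet link via a hypothetical signed index (the "models.json" format is made up), fetch the weights with an existing torrent CLI (aria2c), and register the resulting GGUF file with ollama through a minimal Modelfile. ollama's `create -f` command and the Modelfile `FROM` directive are real; everything about the index is an assumption.

```python
# Sketch only: the index file and its schema are hypothetical.
import json
import subprocess
from pathlib import Path

def make_modelfile(gguf_path):
    # A minimal ollama Modelfile: just point FROM at the local weights.
    return f"FROM {gguf_path}\n"

def fetch_and_register(name, index_path="models.json"):
    # Hypothetical index entry: {"magnet": "...", "filename": "..."}
    index = json.loads(Path(index_path).read_text())
    entry = index[name]
    # aria2c can download magnet links; stop seeding once complete.
    subprocess.run(["aria2c", "--seed-time=0", entry["magnet"]], check=True)
    # Register the downloaded weights with the local ollama instance.
    modelfile = Path(f"{name.replace(':', '-')}.Modelfile")
    modelfile.write_text(make_modelfile(entry["filename"]))
    subprocess.run(["ollama", "create", name, "-f", str(modelfile)], check=True)
```

The nice property of this shape is that the central piece shrinks to a small signed index of magnet links, while the bulk bandwidth cost is shared by seeders.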
Thoughts on this, and any other suggestions/comparisons/etc?
Or is it just me ;p
https://www.technologyreview.com/2023/05/25/1073634/brain-implant-removed-against-her-will/
Her case highlights why we need to enshrine neuro rights in law.
This post is a sort of partial dump of my efforts towards an idea/proposal for improving discoverability and onboarding for the Fediverse while avoiding new users just being dumped on a centralised instance. I've seen people suggest that one of our secondary defenses from megacorp social media (like Meta) is improving our UI, so this is part of my attempt to do that.
We can use our non-monetizability to construct algorithms specifically for the purposes of people finding the content and groups they want, rather than for the purposes of selling them shit.
I actually started working on this during the Reddit Migration, but got sidetracked with other things ^.^, so I'm dumping it here for everyone else to make more progress!
I want to discuss a rough proposal/idea that eases the onboarding of new users to the fediverse, and discovery of groups, while hopefully distributing them across more instances for better load balancing and decentralization. More generally, it should enable easier discovery of groups and instances aligned with your own sentiments and interests, with a transparent algorithm focused on user control and directly connecting people with entities that align with what they want to see.
I may interleave some ActivityPub terms in here because I've been working on a much larger proposition for architectural shifts (capable of incremental change from current) that might allow multi-instance actors and sharding of large communities' storage - I want the fediverse to be capable of arbitrary horizontal scaling. Though of course that will depend heavily on my attention span and time and energy. I might also just dump my incomplete progress because honestly my attention is on other projects related to distributed semiconductor manufacturing atm ^.^
What this post addresses is the current issue of onboarding new users ^.^, and helping users discover communities/instances/other users. These users typically are pointed to one of about 5 or 6 major instances, which causes those instances to have to eat costs, especially since loads of users in one place means loads of communities - and the associated storage needs - in one place (as users create communities on their instances).
My proposition/idea consists of the following:
The first part of the proposal is specifying a way for instances to tag their general topics and category at varying levels of specificity.
Each instance should have a descriptor of what software it is running.
This serves as a proxy for what "type" of social media it is (reddit-like, twitter-like, whatever kbin is, etc.), taking into account that users are likely to have visited an instance based on reports that the type of software it runs is what they want.
I propose some string endpoint like instance_software in the top-level instance actor.
Generally speaking, instances fall into several categories:
There are also instances with varying levels of moderation, which may be encompassed in this. ^.^
To solve this problem, instances should provide an endpoint (for now, let's call it instance_focus) in their representative actor that produces a collection of so-called subject trees with associated weights.
Each subject tree is a nested list that looks like the following:
{
    "weight": 1,
    "polarisability": -0.7,
    "subject-tree": [
        {
            "subject": "programming",
            "terms": [
                ["en", "programming"],
                ["en", "coding"],
                ["en", "software-development"]
            ]
        },
        {
            "subject": "language",
            "terms": [
                ["en", "language"]
            ]
        },
        {
            "subject": "rust",
            "terms": [
                ["*", "rust"],
                ["*", "rustlang"]
            ]
        }
    ]
}
This indicates an instance/other-group that is interested in programming, specifically programming languages or a programming language, and specifically the programming language rust. It also indicates an estimated polarisability by this instance for /programming/language/rust/ of -0.7, i.e. they estimate that people who feel a certain way towards one subtopic of /p/l/rust/ will also likely feel a similar way about other subtopics of /p/l/rust/ unless explicitly specified. There may be other fields which indicate some of the more complex and specific parameters documented in [the proto-algorithm I wrote][algorithm-snippet], such as specific polarizability with sibling subjects (e.g. if rust had antagonistic sentiments toward cpp, it might have a "sibling-polarizability": { "cpp": 0.5 } field, or something similar).
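To show how the weights could actually be used, here's a minimal matching sketch under assumed semantics: a user carries affinities for subject paths, an instance advertises weighted paths, and the match score rewards longer shared prefixes (more specific agreement). All names and the exact weighting formula are illustrative, not part of any spec.

```python
# Hypothetical scoring: names, inputs, and formula are illustrative only.
def score_instance(user_interests, instance_focus):
    """user_interests: {subject_path: affinity in [-1, 1]}
    instance_focus: list of {"path": ..., "weight": ...} entries.
    A longer shared path prefix means a more specific, higher-scoring match."""
    score = 0.0
    for entry in instance_focus:
        parts = entry["path"].strip("/").split("/")
        for path, affinity in user_interests.items():
            want = path.strip("/").split("/")
            shared = 0
            for a, b in zip(parts, want):
                if a != b:
                    break
                shared += 1
            if shared:
                # Normalise so an exact match of equal depth scores weight * affinity.
                score += entry["weight"] * affinity * shared / max(len(parts), len(want))
    return score
```

Because affinities can be negative, the same machinery lets an instance score badly for a user who explicitly dislikes a subject, which is where the polarisability estimates would plug in.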
A useful compact syntax to indicate the tree (for example, in config files) might look something like the following: /programming{en:programming,en:coding,en:software-development}/language{en:language}/rust{*:rust,*:rustlang}/
This encodes the terms the instance knows for these concepts, within the context of the subject above them, along with the language each term is in (a star indicating many human languages share the same term, e.g. with proper names).
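The compact syntax is simple enough to parse with a regular expression. Here's a small sketch that turns a path in that syntax into the same nested-list shape as the JSON example; the function name and output layout are my own choices, not part of the proposal.

```python
# Hypothetical parser for the compact subject-tree syntax sketched above.
# Segments look like /subject{lang:term,lang:term}/ nested general -> specific.
import re

SEGMENT = re.compile(r"([^/{}]+)\{([^{}]*)\}")

def parse_subject_path(path):
    """Parse e.g. '/programming{en:programming,en:coding}/rust{*:rust}/'
    into a list of {"subject": ..., "terms": [[lang, term], ...]} dicts."""
    tree = []
    for subject, terms in SEGMENT.findall(path):
        term_pairs = []
        for entry in terms.split(","):
            if entry:
                lang, _, term = entry.partition(":")
                term_pairs.append([lang, term])
        tree.append({"subject": subject, "terms": term_pairs})
    return tree
```

A round-trip between this compact form and the JSON form would make it cheap for admins to maintain their instance_focus data in a config file.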
For this system to work, there must be a roughly-agreed upon set of names to use as keys.
The "subject-tree" for "general interest" is just an empty list [] ^.^
https://ploum.net/2023-06-23-how-to-kill-decentralised-networks.html
How to Kill a Decentralised Network (such as the Fediverse), written by Ploum (Lionel Dricot), engineer, science-fiction writer, and free-software developer.
https://research.checkpoint.com/2023/rust-binary-analysis-feature-by-feature/
Problem Statement You attempt to analyze a binary file compiled in the Rust programming language. You open the file in your favorite disassembler. Twenty minutes later you wish you had never been born. You’ve trained yourself to think like g++ and msvc: Here’s a loop, there’s a vtable, that’s a global variable, a library function, an exception. Now […]
As implants and biotech develop, I think it is interesting and important to consider that technology integrated with people's bodies and minds is essentially a part of them (note: I have more thoughts on this, like how I consider "external" technology to essentially be a part of me too, but that's a whole other thing ;p).
As such, I think it's worth elevating the importance of Free Software and Free/Open Hardware from a transhumanist activism and politics perspective. ^.^
If we generally consider the ability and access to control, modify, and understand your body - think things like legally having access to all your medical records - to be something like a basic human right, then Free Software and Free Hardware become more than just a fundamental aspect of the right to information and communication, and start to become an ever more important issue of basic bodily integrity.
In the same way that things like abortion and access to trans healthcare are issues of bodily/morphological autonomy, so too do access to, control over, and the right to understand the schematics of any implants (and the mechanisms for communicating with them) become a similar issue ^.^.
As such - at least within the current context of states (I'm an anarchist so I don't consider this as the political endpoint) - I think it would be a really good idea to push for some policies mandating that all schematics and software for devices intended for implantation or to specifically communicate with such devices, are open access and open source, including documentation on how to modify firmware of these devices (e.g. people receiving implants must have access to a cryptographic key that can be used to arbitrarily modify the device firmware).
Furthermore, I think it'd be a very good idea to have strong protections against both coercive implantation and coercive removal of implants ^.^
It's also worth considering the privacy issues. For example, adding legal protections to prevent any kind of location or sensory data from being sent to opaque services with questionable consent.