I host my own Lemmy instance and have a user account on it that I use everywhere (I don't host local communities, I just use it as a home for my Lemmy user account). I needed to re-home my Lemmy server, and though it's a docker installation, copying the /var/lib/docker/volumes/lemmy_*
directories to the new installation didn't work. So I created a new Lemmy server.
How can I move my old account to the new server, so I can keep all my subscriptions and post/comment history?
I'm hoping someone can help me figure out what I'm doing wrong.
I have a VM on my local network that has Traefik, 2 apps (whoami and myapp), and wireguard in server mode (let's call this VM "server"). I have another VM on the same network with Traefik and wireguard in client mode (let's call this VM "client").
I have a host override on my router for myapp.mydomain.com so every computer in my house points it to "client". When I run curl -L --header 'Host: myapp.mydomain.com' from the myapp container, it successfully returns the myapp page. But when I browse to http://myapp.mydomain.com I get "Internal Server Error", yet nothing appears in the docker logs for any app (not the traefik containers, the wireguard containers, or the myapp container).
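In case the routing config matters, here's the general shape of the Traefik labels I believe are needed on the myapp container (reconstructed from the Traefik docs; the router name, entrypoint, and port are illustrative, not my exact config):

```yaml
# Illustrative Traefik v2 labels for the myapp container
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp.rule=Host(`myapp.mydomain.com`)"
  - "traefik.http.routers.myapp.entrypoints=web"
  - "traefik.http.services.myapp.loadbalancer.server.port=8080"  # port is a guess
```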
Any suggestions/assistance would be appreciated!
I have these helpers:
input_text.event_1 (current value: "birthday")
input_text.event_2 (current value: "christmas")
input_date.event_1 (current value: "1/1/2000")
input_date.event_2 (current value: "12/25/2024")
How do I configure the voice assistant to recognize a phrase like "what's the date of birthday" and return "1/1/2000"?
I'm guessing there's some combination of templating and "lists", but there are too many variables for me to continue guessing: conversations, intents, sentences, slots, lists, wildcards, yaml files...
I've tried variations of this in multiple files:
language: "en"
intents:
  WhatsTheDateOf:
    - "what's the date of {eventname}"
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    wildcard: true
    values:
      - "{{ states('input_text.event_1') }}"
      - "{{ states('input_text.event_2') }}"
Should it go in conversations.yaml, intent_scripts.yaml, or a file in custom_sentences/en? Or do the "lists" go in one file and the "intents" in another? In the intent, do I need to define my sentence twice?
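For concreteness, here's the overall shape I've pieced together from the Assist docs, though I don't know if it's right (the filenames and the response template are my guesses, untested):

```yaml
# config/custom_sentences/en/events.yaml (filename is my guess)
language: "en"
intents:
  WhatsTheDateOf:
    data:
      - sentences:
          - "what's the date of {eventname}"
lists:
  eventname:
    wildcard: true

# configuration.yaml (or an included file): respond based on the matched slot
intent_script:
  WhatsTheDateOf:
    speech:
      text: >-
        {% if eventname == states('input_text.event_1') %}
          {{ states('input_date.event_1') }}
        {% elif eventname == states('input_text.event_2') %}
          {{ states('input_date.event_2') }}
        {% else %}
          I don't know that event
        {% endif %}
```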
I'd appreciate any help. I feel like once I see the YAML of an approach that works, I'll be able to examine it and understand how to build variations in the future.
Hi. I self-host gitea in docker and have a few repos, users, keys, etc. I installed forgejo in docker and it runs, so I stopped the container and copied /var/lib/docker/volumes/gitea_data/_data/*
to /var/lib/docker/volumes/forgejo_data/_data/
, but when I restart the forgejo container, forgejo doesn't show any of my repos, users, keys, etc.
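What I expected to work was essentially pointing the Forgejo image at the existing Gitea volume instead of copying files, something like this compose fragment (the image tag, ports, and paths are my guesses from the Forgejo docs, not a verified config):

```yaml
# Hypothetical: run Forgejo directly against the existing Gitea volume
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:1.21   # tag is illustrative
    volumes:
      - gitea_data:/data    # the old Gitea named volume, reused in place
    ports:
      - "3000:3000"
      - "222:22"

volumes:
  gitea_data:
    external: true    # reference the pre-existing volume rather than copying it
```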
My understanding was that the current version of forgejo is a drop-in replacement for gitea, so I was hoping all gitea resources were saved to its docker volume and would thus be instantly usable by forgejo. Guess not. :(
Does anyone have any experience migrating their gitea instance to forgejo?
Howdy.
I have the following helpers:
input_text.countdown_date_01_name
input_datetime.countdown_date_01_date
input_text.countdown_date_02_name
input_datetime.countdown_date_02_date
I want to be able to speak "how many days until X", where X is the value of either input_text.countdown_date_01_name or input_text.countdown_date_02_name, and have Home Assistant speak the response "there are Y days until X", using whichever name was spoken.
I know how to determine the number of days until the date that is the value of input_datetime.countdown_date_01_date
or input_datetime.countdown_date_02_date
. But so far I've been unable to figure out how to configure the sentence/intent so that HA knows which one to retrieve the value of.
In config/conversations.yaml
I have:
intents:
  HowManyDaysUntil:
    - "how many days until {countdownname}"
In config/intents/sentences/en/_cmmon.yaml
I have:
lists:
  countdownname:
    values:
      - '{{ states("input_text.countdown_date_01_name") }}'
      - '{{ states("input_text.countdown_date_02_name") }}'
In config/intent_scripts.yaml
I have:
HowManyDaysUntil:
  action:
    service: automation.trigger
    data:
      entity_id: automation.how_many_days_until_countdown01
(this automation is currently hardcoded to calculate and speak the days until input_datetime.countdown_date_01_date)
The values of my helpers are currently:
input_text.countdown_date_01_name = "vacation"
input_datetime.countdown_date_01_date = "6/1/2024"
When I speak "how many days until vacation" I get "Unexpected error during intent recognition".
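For reference, here's the kind of end-to-end structure I imagine might work, pieced together from the docs (the file locations and the Jinja are my guesses, untested):

```yaml
# config/custom_sentences/en/countdown.yaml
language: "en"
intents:
  HowManyDaysUntil:
    data:
      - sentences:
          - "how many days until {countdownname}"
lists:
  countdownname:
    wildcard: true

# configuration.yaml (or an included file): compute the answer from the matched slot
intent_script:
  HowManyDaysUntil:
    speech:
      text: >-
        {% set ns = namespace(target='') %}
        {% if countdownname == states('input_text.countdown_date_01_name') %}
          {% set ns.target = states('input_datetime.countdown_date_01_date') %}
        {% else %}
          {% set ns.target = states('input_datetime.countdown_date_02_date') %}
        {% endif %}
        {% set days = ((as_timestamp(ns.target) - as_timestamp(now())) / 86400) | round(0, 'ceil') %}
        there are {{ days }} days until {{ countdownname }}
```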
I'd appreciate your help with this!
I can't log into my Spotify account. I get "Incorrect username or password." I'm using my email address for my login.
I clicked "Forgot password" and entered my email address and Spotify said "Email sent. We sent you an email. Follow the instructions to get back into your account." But I didn't receive that email. I waited more than 24 hours, then tried again a couple times. It's not in my spam/junk folder either.
I tried creating a new account with the same email address, but Spotify says "This address is already linked to an existing account. To continue, log in."
Spotify's "reset password" FAQ doesn't cover this situation.
I clicked "Contact Spotify" from the footer of their support pages, and they offer support by sending them a message, contacting them on X or Facebook, or asking for support in their support community. I don't have an X or Facebook account, and when I click to send them a message they require me to log in! I visited their support community and typed my issue, but when I clicked "Post" to submit my issue they require me to log in!
Does anyone know how to contact a human at Spotify?
Thanks for assistance.
I have some of the ATOM Echos that HA describes here. They work for voice recognition but the speaker in these tiny boxes is...tiny. It's barely audible when standing right next to the box, and completely inaudible when standing 10 feet away or if there is noise in the room.
Examples of the voice responses I'm talking about are "I'm sorry but I don't understand that" or "The current time is 2:15pm" or "I turned on the lights in the living room."
Is it possible to re-route the voice responses to a different media player? Currently, I have a Google Home Mini in each room that I have an ATOM Echo in. It would be nice if I could somehow determine which Echo received the voice command, which area that Echo is in (e.g., "living room"), and then re-route the voice response to a media player in that area.
But I have no idea how to do this.
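The closest sketch I can come up with is an action that picks the media player from the Echo's area, assuming the trigger exposes the Echo's device_id and my media players follow a naming convention (none of this is verified against the Assist pipeline):

```yaml
# Hypothetical: speak the response on a media player in the same area as the Echo.
# Assumes media players are named media_player.<area>_speaker.
service: tts.google_translate_say
data:
  entity_id: >-
    media_player.{{ area_name(trigger.event.data.device_id) | lower | replace(' ', '_') }}_speaker
  message: "{{ response_text }}"   # placeholder; I don't know where the response text lives
```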
I have a robot vacuum that sends an alert to HA when it's done cleaning or when it encounters a problem. How can I intercept or re-route those notifications? I want to post them to Matrix, which I do have an integration for.
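The best I can sketch is an automation that fires on whatever the vacuum exposes and re-posts to the Matrix notifier, with the trigger details and entity names as placeholders:

```yaml
# Hypothetical: forward vacuum status to Matrix
automation:
  - alias: "Forward vacuum alerts to Matrix"
    trigger:
      - platform: state
        entity_id: vacuum.robot    # placeholder entity id
        to: "docked"               # guessing at the 'done cleaning' state
    action:
      - service: notify.matrix_notify   # whatever the Matrix notifier is named
        data:
          message: "Vacuum finished cleaning."
```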
Thanks for assistance.
I've had a problem for a year or more, so that's through numerous Home Assistant updates: I have about 15 automations that I've disabled, but they always become enabled again within a few days. I haven't been able to determine a trigger for the re-enabling.
Has anyone else encountered this? Does anyone have a suggestion?
I'm trying to find a new projector for my home theater. I don't need high end, but it should be at least 1080p (i.e., 4K isn't necessary). I mention that because of my budget: I'm looking for something around $500, but I might be able to go up to $1000. The other main requirement is that I'm able to turn it on/off via Home Assistant.
Other nice-to-have features, but not requirements:
Thanks for your suggestions.
@mike_wooskey@lemmy.d.thewooskeys.com