Loudspeaker Network & Audio Alerts with Home Assistant

The doorbell broadcasts to the loudspeaker network via Node-RED.

Aside from playing music, a multi-room audio system is also capable of becoming a loudspeaker network. Using Home Assistant, it’s easy to broadcast audio alerts to the entire household.

Playing a WAV File in Home Assistant

The simplest approach is to use a shell_command.

I keep all my audio alert files in the /config/audio directory so that Home Assistant can access them. Then I simply pipe the audio file into the snapserver's FIFO:

shell_command:
  play_doorbell_alert: 'cat /config/audio/doorbell.wav > /tmp/snapfifo-loudspeaker'

Thanks to the multi-room audio setup, this file is broadcast to the different speakers.
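
For example, any automation can call this shell command when the doorbell fires. A minimal sketch (binary_sensor.doorbell_button is a hypothetical entity name):

automation:
  - alias: 'Doorbell pressed'
    trigger:
      - platform: state
        entity_id: binary_sensor.doorbell_button  # hypothetical doorbell sensor
        to: 'on'
    action:
      - service: shell_command.play_doorbell_alert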

Home Assistant will need access to the /tmp/ directory: the same directory that the snapserver reads its FIFO from.
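
If Home Assistant and the snapserver run in separate containers, that means sharing the directory explicitly. Here is a rough sketch of the idea with docker-compose (an assumption about the setup; the image names are placeholders):

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config
      - /tmp:/tmp   # the FIFO Home Assistant writes to...
  snapserver:
    image: example/snapserver   # placeholder; use whatever snapserver build you run
    volumes:
      - /tmp:/tmp   # ...is the same FIFO the snapserver reads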

This only works, though, when the audio source is set correctly…

Switching Audio Inputs

What if the speakers are also used for music?

You might notice that the audio is piped into a file named /tmp/snapfifo-loudspeaker. This is an audio stream dedicated to alerts and broadcasts for the entire house. The AirPlay and Spotify inputs go into two other audio streams. If one of those streams is playing, it is necessary to:

  1. Save the name of the current stream.
  2. Change the stream to the “loudspeaker.”
  3. Play the audio alert.
  4. Change the stream back to the original stream.

To accomplish all this, I used Node-RED. It is certainly possible with just Home Assistant, but far more complicated. Besides, the alerts are usually triggered from Node-RED (or similar) anyway, such as by the DIY doorbell.
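
For reference, a Home-Assistant-only version of those four steps could look roughly like the script below. This is a sketch, not the flow I actually use: it relies on the snapcast integration's snapcast.snapshot and snapcast.restore services, and the entity name and delay are placeholders.

script:
  play_doorbell_on_loudspeaker:
    sequence:
      # 1. Save the client's current stream (and volume).
      - service: snapcast.snapshot
        entity_id: media_player.snapcast_group_commons
      # 2. Switch the client to the "loudspeaker" stream.
      - service: media_player.select_source
        entity_id: media_player.snapcast_group_commons
        data:
          source: loudspeaker
      # 3. Play the alert into the FIFO.
      - service: shell_command.play_doorbell_alert
      # 4. Give the buffered audio time to finish, then restore the original stream.
      - delay: '00:00:03'
      - service: snapcast.restore
        entity_id: media_player.snapcast_group_commons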

The following Node-RED flow can be triggered by a "link out" node. Before calling the flow, set msg.audio_file to the name of the alert (doorbell, in the above example) and set msg.payload to an array of the snapcast clients (or groups) that should play the alert; for an example, refer to the screenshot at the top of this post. The flow's "alert config" function node shows one such setup: msg.payload = ['snapcast_client_commons'] and msg.audio_file = 'visitor'.

[{"id":"88ab5868.a9c298","type":"tab","label":"Play Alert","disabled":false,"info":""},{"id":"24aa9078.3fd3c","type":"function","z":"88ab5868.a9c298","name":"prepare","func":"msg.group = msg.payload\nmsg.media_player = 'media_player.' + msg.group\nmsg.payload = { 'entity_id': msg.media_player }\nreturn msg;","outputs":1,"noerr":0,"x":280,"y":100,"wires":[["b89ced91.6f43b"]]},{"id":"b89ced91.6f43b","type":"api-current-state","z":"88ab5868.a9c298","name":"state","server":"2da36d5.8fe2592","version":1,"outputs":1,"halt_if":"","halt_if_type":"str","halt_if_compare":"is","override_topic":false,"entity_id":"","state_type":"str","state_location":"payload","override_payload":"msg","entity_location":"data","override_data":"msg","blockInputOverrides":false,"x":410,"y":100,"wires":[["aeb7c9da.7382b8"]]},{"id":"e31b0c3e.0f70d","type":"split","z":"88ab5868.a9c298","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":150,"y":100,"wires":[["24aa9078.3fd3c"]]},{"id":"aeb7c9da.7382b8","type":"api-call-service","z":"88ab5868.a9c298","name":"select loudspeaker","server":"2da36d5.8fe2592","version":1,"debugenabled":false,"service_domain":"media_player","service":"select_source","entityId":"{{media_player}}","data":"{\"source\":\"loudspeaker\"}","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":590,"y":100,"wires":[["af6904b.08531f8"]]},{"id":"11cf1226.d188fe","type":"api-call-service","z":"88ab5868.a9c298","name":"restore source","server":"2da36d5.8fe2592","version":1,"debugenabled":false,"service_domain":"media_player","service":"select_source","entityId":"{{media_player}}","data":"{\"source\":data.attributes.source}","dataType":"jsonata","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":580,"y":160,"wires":[[]]},{"id":"6d2647d2.9609d8","type":"api-call-service","z":"88ab5868.a9c298","name":"play","server":"2da36d5.8fe2592","version":1,"debugenabled":false,"service_domain":"shell_command","service":"play_{{audio_file}}_alert","entityId":"","data":"","dataType":"json","mergecontext":"","output_location":"","output_location_type":"none","mustacheAltTags":false,"x":150,"y":160,"wires":[["f1ddbfd7.2644b"]]},{"id":"f1ddbfd7.2644b","type":"split","z":"88ab5868.a9c298","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":270,"y":160,"wires":[["e151d94d.42d478"]]},{"id":"e151d94d.42d478","type":"function","z":"88ab5868.a9c298","name":"prepare","func":"msg.group = msg.payload\nmsg.media_player = 'media_player.' 
+ msg.group\nmsg.payload = { 'entity_id': msg.media_player }\nreturn msg;","outputs":1,"noerr":0,"x":420,"y":160,"wires":[["11cf1226.d188fe"]]},{"id":"a29b00b7.eb1e4","type":"link in","z":"88ab5868.a9c298","name":"","links":["b29677d6.5876d8"],"x":55,"y":100,"wires":[["e31b0c3e.0f70d"]]},{"id":"af6904b.08531f8","type":"join","z":"88ab5868.a9c298","name":"","mode":"auto","build":"string","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":750,"y":100,"wires":[["6d2647d2.9609d8"]]},{"id":"10a6976a.d95a19","type":"function","z":"88ab5868.a9c298","name":"alert config","func":"msg.payload = ['snapcast_client_commons']\nmsg.audio_file = 'visitor'\nreturn msg;","outputs":1,"noerr":0,"x":270,"y":240,"wires":[["e31b0c3e.0f70d"]]},{"id":"47e83837.bc8928","type":"inject","z":"88ab5868.a9c298","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":100,"y":240,"wires":[["10a6976a.d95a19"]]},{"id":"2da36d5.8fe2592","type":"server","z":"","name":"Cabin","legacy":false,"addon":false,"rejectUnauthorizedCerts":true,"ha_boolean":"y|yes|true|on|home|open","connectionDelay":true,"cacheJson":true}]

Microphones & Intercoms

The same principle applies.

For example, if someone is speaking into the microphone at the doorbell, that audio stream simply needs to be piped into the /tmp/snapfifo-loudspeaker FIFO. However, this raises two new concerns:

  • Home -> intercom: a true “intercom” is bi-directional (the person outside needs to hear us).
  • Intercom -> home: the microphone might be on a different device than Home Assistant.

Given that we’re already using snapcast, it is probably simplest to solve these problems with the same tools. The first problem can be solved by adding another stream. In this example, the speaker on the doorbell would subscribe to the /tmp/snapfifo-doorbell stream. Now, any microphone input piped into the snapserver on this stream will play through the doorbell/intercom speaker.
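
For example, assuming the snapserver has been given an extra pipe stream reading from /tmp/snapfifo-doorbell (a pipe:///tmp/snapfifo-doorbell?name=doorbell source alongside the existing streams), a microphone on that host could be pushed into it with something like this. The ALSA device name is a placeholder, and the raw sample format must match the snapserver's pipe settings (48000:16:2 by default):

# capture raw PCM from a local microphone and write it into the doorbell stream's FIFO
arecord -D plughw:1,0 -f S16_LE -r 48000 -c 2 -t raw > /tmp/snapfifo-doorbell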

The second problem is trickier. At first I had hoped to use an NFS share or other network filesystem to pipe the data, but unfortunately this is not possible with FIFOs. Using snapcast instead requires a second snapserver, running on the intercom itself. Doing so allows the microphone to broadcast over its own audio stream, so any audio picked up by the intercom’s microphone is available to the whole network. But since this is a different snapserver, it’s not as easy to just switch a given speaker’s source over to it. Instead, the Home Assistant server can run a snapclient which outputs to its own fifo… which is then picked up by its own snapserver.
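
One way to wire up that relay is to swap the FIFO handoff for an ALSA loopback device. This is a rough sketch, assuming the snd-aloop kernel module is loaded; the hostname and device numbers are placeholders:

# snapclient on the Home Assistant host plays the intercom's stream into one side of the loopback...
snapclient --host intercom.local --soundcard hw:Loopback,0,0 &

# ...while the other side is recorded and written into the local snapserver's loudspeaker FIFO
# (or a dedicated intercom stream); the format must match that pipe's sampleformat.
arecord -D hw:Loopback,1,0 -f S16_LE -r 48000 -c 2 -t raw > /tmp/snapfifo-loudspeaker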

Build Guides

Looking for even more detail?

Drop your email in the form below and you'll receive links to the individual build-guides and projects on this site, as well as updates with the newest projects.

... but this site has no paywalls. If you do choose to sign up for this mailing list I promise I'll keep the content worth your time.

Written by (zane) / Technically Wizardry
Join the discussion

  • Home Assistant has a service snapcast.snapshot, which will snapshot the volume and stream of any specified snapcast media_players. Once the interruption is done being played, you can fire off snapcast.restore to resume. This may be a bit cleaner 🙂

    Great site btw

    • Good point! I did see that; I just didn’t think of using it for the interruption bit. The reason I did not use it was because each of my streams seems to have a slightly different latency for the speakers. This means that I actually save each speaker’s latency for each stream and have a script to restore them all when the stream switches. However, since interruptions go back to the same stream, it shouldn’t be a problem.

      And thank you! It’s a labor of love.

  • Hi Zane,
    I’m trying to stream radio music using Home Assistant. My speaker is plugged into my Raspberry Pi. The Spotify plugin is working flawlessly and plays media directly on my speaker.
    However, I don’t have an “entity_id” for the speaker, so how can I output a radio stream (or any MP3 audio, really) onto my “non-smart” speaker?
    Thanks!
    — Tony
