Can't update to 2023.11.3 #4709
Comments
@jensjakobandersen It most definitely exists: it has quite a few downloads already as well. You can see/test for yourself by taking the image name from the log you shared. If this all fails, you might be dealing with corruption in your filesystem, or maybe you are simply running out of disk space? You could try running ha supervisor repair. ../Frenck |
Thank you. I ran ha supervisor repair - it ran without any problems. Can you explain this part of the error message to me? I don't run Docker (I guess); I run native HAOS on an Odroid. |
It means the local image cannot be found (the 404 means: not found on your system). |
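For what it's worth, the same condition can be checked by hand; a minimal sketch, assuming a shell with access to the Docker daemon (for example the HAOS host console), with the image name taken from the log above:
# Ask the local Docker store for the image; this local lookup is what returns the 404.
docker image inspect ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3
# "No such image" here is the same condition as the Supervisor's 404. Re-running
# the pull shows whether the download itself fails:
docker pull ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3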
Yes, and the path, 1.43, etc. do not exist locally.
|
I have the same issue. Disk space is at 13%, and the (very scary) ha supervisor repair didn't fix it. |
Same here. I've got 6 GB of free space. |
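If anyone wants to double-check the free-space numbers, a small sketch (assuming the standard ha CLI in the SSH add-on):
ha host info     # look at the disk_free / disk_used / disk_total fields
df -h /          # rough fallback; inside the add-on this shows the container's view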
Same issue for me. I tried running the "ha supervisor repair", which completed successfully but I still cannot do the update. I am only using 6% of my disk space, so that's not the issue. I can access https://ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3 from the browser. Any other ideas for troubleshooting where the problem is? |
@jensjakobandersen can you still reproduce that problem? @bazb @xgolubev @GJSchroeder do you see the very same symptoms (Downloading docker image ghcr.io/home-assistant/aarch64-hassio-supervisor with tag 2023.11.3. followed by 23-11-14 21:13:57 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3/json: Not Found ("No such image: ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3"))? Can you share your Supervisor log as well, to get a bit wider perspective on this issue? |
@agners Yes, same logs.
|
@agners Yes, same error log for me as well. The only thing I can see that might be related is that I installed HA on a new HA Yellow and restored my backup from my HA running on a Pi. That was the 2023.10.5 version. Then I tried to upgrade to 2023.11 and have never been able to because of this error. Everything else works fine after the migration to Yellow, though.
|
Yes, still get the same error. Running on Supervisor 2023.10.1 - can't upgrade to 2023.11.1, 11.2, or 11.3. Has anything changed in how Docker finds images? Anything I can do from the console to help with error identification?
|
I got this in the Home Assistant Core log as well: [281473038033344] Error updating Home Assistant Supervisor: Update of Supervisor failed: Can't install ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3/json: Not Found ("No such image: ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.3") The above exception was the direct cause of the following exception: Traceback (most recent call last): |
Similar error here. I can't update Core from 2023.10.5 to 2023.11.3 - or update any other add-on - on my HAOS on a RasPi 3. Core Logs
Supervisor Logs
I can access https://github.com/home-assistant/plugin-cli/pkgs/container/aarch64-hassio-cli via browser. Tried several restarts, of HA and the host itself. I see that my logs say Server Error whereas the other logs say Client Error. Is this related? Or can anyone point me in the right direction? |
Hello guys, I'm a new Home Assistant user. I had the same problem and resolved it by using DNS 8.8.8.8 instead of my classic 192.168.1.1. |
Quite interesting. I have 8.8.8.8 and 8.8.4.4. When I try to remove 8.8.4.4 and Save, I get this error. I will open a new case on that error, but it might be a lead on the Supervisor update error.
23-11-23 06:07:51 ERROR (MainThread) [asyncio] Task exception was never retrieved
future: <Task finished name='Task-509' coro=<DBus.sync_property_changes.<locals>.sync_property_change() done, defined at /usr/src/supervisor/supervisor/utils/dbus.py:208> exception=DBusFatalError('Object does not exist at path "/org/freedesktop/NetworkManager/ActiveConnection/2"')>
Traceback (most recent call last):
  File "/usr/src/supervisor/supervisor/utils/dbus.py", line 101, in call_dbus
    return await getattr(proxy_interface, method)(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
@jensjakobandersen what type of installation are you using? Are there maybe errors related to NetworkManager in the host logs? |
What would be interesting here are the Docker host logs after a failed attempt. You can use the following command to get them |
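The command itself is missing above; on HAOS both Docker and NetworkManager log to the system journal, so from the host shell something along these lines should work (a sketch, not quoted from the original):
journalctl -u docker.service --no-pager | tail -n 200
journalctl -u NetworkManager --no-pager | tail -n 200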
Odroid N2+, HAOS. Wi-Fi on a USB stick. Will look at getting more logs on the issue.
|
The dockerd logfile after trying to change DNS from 8.8.8.8, 8.8.4.4 to 8.8.8.8 only:
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.426373443Z" level=info msg="ignoring event"
container=140cf870147945ba917e5fa4b396555e7813210813821cd847257a2054304c40
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.453707759Z" level=warning msg="ShouldRestart
failed, container will not be restarted"
container=140cf870147945ba917e5fa4b396555e7813210813821cd847257a2054304c40
daemonShuttingDown=true error="restart canceled"
execDuration=21h54m5.782414426s exitStatus="{2 2023-11-23 05:04:53.40345534
+0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.533168709Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.659294281Z" level=info msg="stopping event stream
following graceful shutdown" error="<nil>" module=libcontainerd
namespace=moby
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.659890351Z" level=info msg="Daemon shutdown
complete"
Nov 23 05:04:53 Plantagen dockerd[505]:
time="2023-11-23T05:04:53.660074860Z" level=info msg="stopping event stream
following graceful shutdown" error="context canceled" module=libcontainerd
namespace=plugins.moby
Nov 23 05:05:22 Plantagen dockerd[496]:
time="2023-11-23T05:05:22.501483300Z" level=info msg="Starting up"
Nov 23 05:05:22 Plantagen dockerd[496]:
time="2023-11-23T05:05:22.501607966Z" level=warning msg="Running
experimental build"
Nov 23 05:05:22 Plantagen dockerd[496]:
time="2023-11-23T05:05:22.571184675Z" level=info msg="[graphdriver] trying
configured driver: overlay2"
Nov 23 05:05:22 Plantagen dockerd[496]:
time="2023-11-23T05:05:22.812043342Z" level=info msg="Loading containers:
start."
Nov 23 05:05:23 Plantagen dockerd[496]:
time="2023-11-23T05:05:23.763048509Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 05:05:24 Plantagen dockerd[496]:
time="2023-11-23T05:05:24.260684731Z" level=info msg="Loading containers:
done."
Nov 23 05:05:24 Plantagen dockerd[496]:
time="2023-11-23T05:05:24.341675285Z" level=info msg="Docker daemon"
commit=buildroot graphdriver=overlay2 version=24.0.6
Nov 23 05:05:24 Plantagen dockerd[496]:
time="2023-11-23T05:05:24.342792316Z" level=info msg="Daemon has completed
initialization"
Nov 23 05:05:24 Plantagen dockerd[496]:
time="2023-11-23T05:05:24.387085538Z" level=info msg="API listen on
/run/docker.sock"
Nov 23 05:05:24 Plantagen dockerd[496]:
time="2023-11-23T05:05:24.832531688Z" level=warning msg="Failed to delete
conntrack state for 172.30.232.2: invalid argument"
Nov 23 05:05:28 Plantagen dockerd[496]:
time="2023-11-23T05:05:28.496881940Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.5: invalid argument"
Nov 23 05:05:29 Plantagen dockerd[496]:
time="2023-11-23T05:05:29.019418334Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.3: invalid argument"
Nov 23 05:05:29 Plantagen dockerd[496]:
time="2023-11-23T05:05:29.537408285Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.4: invalid argument"
Nov 23 05:05:36 Plantagen dockerd[496]:
time="2023-11-23T05:05:36.992290965Z" level=error msg="error unmounting
/mnt/data/docker/overlay2/76bf2bd57cb4239d701ca6353f80babb68e5913aa0da21713d386028499c94d9/merged:
invalid argument" storage-driver=overlay2
Nov 23 05:05:51 Plantagen dockerd[496]:
time="2023-11-23T05:05:51.921238624Z" level=info msg="Attempting next
endpoint for pull after error: failed to register layer: error creating
overlay mount to
/mnt/data/docker/overlay2/76bf2bd57cb4239d701ca6353f80babb68e5913aa0da21713d386028499c94d9/merged:
too many levels of symbolic links"
Nov 23 05:05:52 Plantagen dockerd[496]:
time="2023-11-23T05:05:52.132590423Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.0: invalid argument"
Nov 23 05:09:54 Plantagen dockerd[505]:
time="2023-11-23T05:09:54.619826008Z" level=info msg="Starting up"
Nov 23 05:09:54 Plantagen dockerd[505]:
time="2023-11-23T05:09:54.619934050Z" level=warning msg="Running
experimental build"
Nov 23 05:09:54 Plantagen dockerd[505]:
time="2023-11-23T05:09:54.712395967Z" level=info msg="[graphdriver] trying
configured driver: overlay2"
Nov 23 05:09:54 Plantagen dockerd[505]:
time="2023-11-23T05:09:54.945826633Z" level=info msg="Loading containers:
start."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.652296817Z" level=info msg="Removing stale
sandbox d6e340156d55bbf4d879325fd5bd83fa5d4e1afd66d9506e99fef53e856b6d83
(830403004d1fb6a75f6a5fa320b55797e0f336637b90003a2e28dd2d517a2cee)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.658231371Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.0: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.666905985Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
f465df5731d8db62c68335f16983a9478125521eb42452ff247fb76cc6f2f756],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.678529663Z" level=info msg="Removing stale
sandbox d77ac53d7e49b308a8411b34bd5a5b26b4ad07a44f03000e8b10e4b5ff6f6e93
(a85fc2b4ec82c93a386c96eb51815a902dd1c1182e984df3be6fc5c5a6a0c78e)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.682104958Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
6c96a80103630242eaf6763b17055cc169c908ef3c8f9e2b5e8450598a6ef5b5
176e39777a7b50464be7719c8a7e9b7c6916f20322f6ef2410eb3000bfaafdfd],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.688433727Z" level=info msg="Removing stale
sandbox ee89f2f08df1088f4bb4460ff03115ef607a45a647b4e02ca4ad89797fa72320
(431fb4c1ca71936b15bf6f877dc04be1c3ba2f92b4c6dfb432bb5ce2974220e0)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.690741873Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
6c96a80103630242eaf6763b17055cc169c908ef3c8f9e2b5e8450598a6ef5b5
48804cf43121c5a02a688429f5d2f13bcf59884d823ba69dcca7bf74f8786082],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.697212412Z" level=info msg="Removing stale
sandbox f101a6f2ca0d45ec579cb8a0479221834e11f6e96cc7c5f0aaa8046dfe35e995
(62285616862ae9fda80c0ee5ecc6e94a135c8ec6e93a0d7810809383662e0d6d)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.701168426Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.3: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.704574631Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
38e09ab661037ac941203311d46896fe73d77e98e0ad75449a4d00652928ecf7],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.712834971Z" level=info msg="Removing stale
sandbox 3438c0240c25f52db8fe6e605611c22162aed8bdf2eb13f759dc98264927e1b2
(626c06ed04eed481cafdebdf5183b045fc1508b0788fa7ca5936ed4c0fc71f84)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.716794513Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.5: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.720340642Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
a4412524c22756008f10d390f59db9c93b13a3a4b1cbb6805338247fc7b733de],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.729727386Z" level=info msg="Removing stale
sandbox 458fc2b3e0fec52da44120c9eceecf6b67175da226b2eb49615c22da200bb344
(f0020d735b1549078fa61ca3bae252c5efd748e329f09fd0d60ca914dcdfa4bc)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.733675647Z" level=warning msg="Failed to delete
conntrack state for 172.30.232.2: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.740390754Z" level=warning msg="Failed to clean up
network resources on container hassio_supervisor disconnect: failed to set
gateway while updating gateway: route for the gateway 172.30.32.1 could not
be found: network is unreachable"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.746350373Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.2: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.748080488Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
e1821db939a03e881a8f28713016cc9725c1a4dc9a478251b1d4de7e0f280bd0
1302e4a5e5bc659cfc75aafc663ae443a2eaa45322cde960f836fd5ad29c2321],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.755399060Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.2: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.758737703Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
4a6d7c240383accf95b196590cef063249a57c875af4d7cbcc57877e6f266e04],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.767475326Z" level=info msg="Removing stale
sandbox 8b8c394e93d2b428e560b50193c35a6c27e84931342ef37d7c275f7cd3717eca
(d31210eeea5b7a13bf25c5014e3aaacfcb953471e2c23af5e1915fc69f80473d)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.771711064Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.4: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.775163870Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
86a495a4943bf000cc7995cbeaf35553256e64c8ee0b66eefda66b87dda73c22],
retrying...."
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.783252618Z" level=info msg="Removing stale
sandbox c06476430ecfc1d72a2d4904b0f064825b2c6e1bf910171f1a4d3331cd0d35a1
(140cf870147945ba917e5fa4b396555e7813210813821cd847257a2054304c40)"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.856612723Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 05:09:55 Plantagen dockerd[505]:
time="2023-11-23T05:09:55.860336023Z" level=warning msg="Error (Unable to
complete atomic operation, key modified) deleting object [endpoint
cf2b8c8de1314916307d5baaa446d818a522b14870b904e098bc5042a20a77f8
cd8c68977dfa97115938c4fa904c8b7304509401be84f820302c320d2f1020f6],
retrying...."
Nov 23 05:09:56 Plantagen dockerd[505]:
time="2023-11-23T05:09:56.329073740Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 05:09:56 Plantagen dockerd[505]:
time="2023-11-23T05:09:56.815200230Z" level=info msg="Loading containers:
done."
Nov 23 05:09:56 Plantagen dockerd[505]:
time="2023-11-23T05:09:56.893355376Z" level=info msg="Docker daemon"
commit=buildroot graphdriver=overlay2 version=24.0.6
Nov 23 05:09:56 Plantagen dockerd[505]:
time="2023-11-23T05:09:56.895005412Z" level=info msg="Daemon has completed
initialization"
Nov 23 05:09:56 Plantagen dockerd[505]:
time="2023-11-23T05:09:56.945532515Z" level=info msg="API listen on
/run/docker.sock"
Nov 23 05:09:57 Plantagen dockerd[505]:
time="2023-11-23T05:09:57.257557043Z" level=warning msg="Failed to delete
conntrack state for 172.30.232.2: invalid argument"
Nov 23 05:10:01 Plantagen dockerd[505]:
time="2023-11-23T05:10:01.004396113Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.5: invalid argument"
Nov 23 05:10:01 Plantagen dockerd[505]:
time="2023-11-23T05:10:01.603779122Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.3: invalid argument"
Nov 23 05:10:02 Plantagen dockerd[505]:
time="2023-11-23T05:10:02.145216639Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.4: invalid argument"
Nov 23 05:10:13 Plantagen dockerd[505]:
time="2023-11-23T05:10:13.371341620Z" level=error msg="error unmounting
/mnt/data/docker/overlay2/1ddfbfadcbd626b19222b7461b187ca30892d84ceb1504c9d04e20aec49e58b3/merged:
invalid argument" storage-driver=overlay2
Nov 23 05:10:26 Plantagen dockerd[505]:
time="2023-11-23T05:10:26.210788310Z" level=info msg="Attempting next
endpoint for pull after error: failed to register layer: error creating
overlay mount to
/mnt/data/docker/overlay2/1ddfbfadcbd626b19222b7461b187ca30892d84ceb1504c9d04e20aec49e58b3/merged:
too many levels of symbolic links"
Nov 23 05:10:26 Plantagen dockerd[505]:
time="2023-11-23T05:10:26.534389817Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.0: invalid argument"
Nov 23 12:21:05 Plantagen dockerd[505]:
time="2023-11-23T12:21:05.077714719Z" level=info msg="ignoring event"
container=a85fc2b4ec82c93a386c96eb51815a902dd1c1182e984df3be6fc5c5a6a0c78e
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:12 Plantagen dockerd[505]:
time="2023-11-23T12:21:12.012120170Z" level=info msg="ignoring event"
container=ade04b336fc7f163957a3a0fa949751e4a1e422e75dbf1e586a87cef61ca1e32
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:12 Plantagen dockerd[505]:
time="2023-11-23T12:21:12.052820135Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.0: invalid argument"
Nov 23 12:21:16 Plantagen dockerd[505]:
time="2023-11-23T12:21:16.328862927Z" level=info msg="ignoring event"
container=4d85bdb95dd98af43fc7c1723b1909591aef36834956a1b09c7778d9422df0c5
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:16 Plantagen dockerd[505]:
time="2023-11-23T12:21:16.367870263Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.5: invalid argument"
Nov 23 12:21:20 Plantagen dockerd[505]:
time="2023-11-23T12:21:20.700305573Z" level=info msg="ignoring event"
container=a0daaed549bd2c2eaca04fd0460286b38ad24b31bca7afae88929b42e9abcaf7
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:20 Plantagen dockerd[505]:
time="2023-11-23T12:21:20.738159600Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.3: invalid argument"
Nov 23 12:21:25 Plantagen dockerd[505]:
time="2023-11-23T12:21:25.090771568Z" level=info msg="ignoring event"
container=cf639796f87611196b308589c8d3419663f5becdb34c080892ad9ab75c35eb0f
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:25 Plantagen dockerd[505]:
time="2023-11-23T12:21:25.123377617Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.4: invalid argument"
Nov 23 12:21:29 Plantagen dockerd[505]:
time="2023-11-23T12:21:29.423967454Z" level=info msg="ignoring event"
container=1cce0db9a5f571b341d3dad1ce07908efaeb747e4018eeba68eae5d4cfc21846
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.531863735Z" level=info msg="ignoring event"
container=f0020d735b1549078fa61ca3bae252c5efd748e329f09fd0d60ca914dcdfa4bc
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.567468605Z" level=warning msg="Failed to delete
conntrack state for 172.30.232.2: invalid argument"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.613881405Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.2: invalid argument"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.684281977Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.2: invalid argument"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.890507606Z" level=info msg="Processing signal
'terminated'"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.935012766Z" level=info msg="ignoring event"
container=140cf870147945ba917e5fa4b396555e7813210813821cd847257a2054304c40
module=libcontainerd namespace=moby topic=/tasks/delete
type="*events.TaskDelete"
Nov 23 12:21:34 Plantagen dockerd[505]:
time="2023-11-23T12:21:34.968519362Z" level=warning msg="ShouldRestart
failed, container will not be restarted"
container=140cf870147945ba917e5fa4b396555e7813210813821cd847257a2054304c40
daemonShuttingDown=true error="restart canceled"
execDuration=7h11m38.168230207s exitStatus="{2 2023-11-23
12:21:34.902069783 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Nov 23 12:21:35 Plantagen dockerd[505]:
time="2023-11-23T12:21:35.047348075Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 12:21:35 Plantagen dockerd[505]:
time="2023-11-23T12:21:35.174782743Z" level=info msg="stopping event stream
following graceful shutdown" error="<nil>" module=libcontainerd
namespace=moby
Nov 23 12:21:35 Plantagen dockerd[505]:
time="2023-11-23T12:21:35.175402233Z" level=info msg="Daemon shutdown
complete"
Nov 23 12:22:03 Plantagen dockerd[505]:
time="2023-11-23T12:22:03.995925069Z" level=info msg="Starting up"
Nov 23 12:22:03 Plantagen dockerd[505]:
time="2023-11-23T12:22:03.996043735Z" level=warning msg="Running
experimental build"
Nov 23 12:22:04 Plantagen dockerd[505]:
time="2023-11-23T12:22:04.081034111Z" level=info msg="[graphdriver] trying
configured driver: overlay2"
Nov 23 12:22:04 Plantagen dockerd[505]:
time="2023-11-23T12:22:04.308809194Z" level=info msg="Loading containers:
start."
Nov 23 12:22:05 Plantagen dockerd[505]:
time="2023-11-23T12:22:05.308626236Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.6: invalid argument"
Nov 23 12:22:05 Plantagen dockerd[505]:
time="2023-11-23T12:22:05.831420403Z" level=info msg="Loading containers:
done."
Nov 23 12:22:05 Plantagen dockerd[505]:
time="2023-11-23T12:22:05.912844028Z" level=info msg="Docker daemon"
commit=buildroot graphdriver=overlay2 version=24.0.6
Nov 23 12:22:05 Plantagen dockerd[505]:
time="2023-11-23T12:22:05.913812153Z" level=info msg="Daemon has completed
initialization"
Nov 23 12:22:05 Plantagen dockerd[505]:
time="2023-11-23T12:22:05.959385486Z" level=info msg="API listen on
/run/docker.sock"
Nov 23 12:22:06 Plantagen dockerd[505]:
time="2023-11-23T12:22:06.309923951Z" level=warning msg="Failed to delete
conntrack state for 172.30.232.2: invalid argument"
Nov 23 12:22:10 Plantagen dockerd[505]:
time="2023-11-23T12:22:10.132526982Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.5: invalid argument"
Nov 23 12:22:10 Plantagen dockerd[505]:
time="2023-11-23T12:22:10.629140268Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.3: invalid argument"
Nov 23 12:22:11 Plantagen dockerd[505]:
time="2023-11-23T12:22:11.139252072Z" level=warning msg="Failed to delete
conntrack state for 172.30.32.4: invalid argument"
Nov 23 12:22:22 Plantagen dockerd[505]:
time="2023-11-23T12:22:22.117374763Z" level=error msg="error unmounting
/mnt/data/docker/overlay2/6480d8c0b69c2a78b5383ac9958456fd8e6b723eb7bddd50f916c72c815f8fec/merged:
invalid argument" storage-driver=overlay2
Nov 23 12:22:42 Plantagen dockerd[505]:
time="2023-11-23T12:22:42.971156013Z" level=info msg="Attempting next
endpoint for pull after error: failed to register layer: error creating
overlay mount to
/mnt/data/docker/overlay2/6480d8c0b69c2a78b5383ac9958456fd8e6b723eb7bddd50f916c72c815f8fec/merged:
too many levels of symbolic links"
Nov 23 12:22:43 Plantagen dockerd[505]:
time="2023-11-23T12:22:43.202009142Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.0: invalid argument"
Nov 23 12:29:38 Plantagen dockerd[505]:
time="2023-11-23T12:29:38.768748528Z" level=warning msg="Failed to delete
conntrack state for 172.30.33.1: invalid argument"
|
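The repeated "error unmounting ... invalid argument" and "error creating overlay mount ... too many levels of symbolic links" entries in the dockerd log above are the telling part: they usually mean a layer in the overlay2 store is damaged so its link entries loop back on themselves. A hedged spot-check from the HAOS host shell, reusing a layer path taken straight from the log (the path may no longer exist):
# Inspect the layer directory named in the mount error.
ls -l /mnt/data/docker/overlay2/76bf2bd57cb4239d701ca6353f80babb68e5913aa0da21713d386028499c94d9
# The short-name symlinks overlayfs uses; dangling or looping targets hint at corruption.
ls -l /mnt/data/docker/overlay2/l | head -n 20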
Hi, with the new version 2023.11.3: after 2 hours I had to break off the update because it hung and never finished. I started HA and the version is 2023.11.2, but the Supervisor is 2023.11.3. What is wrong with this build? What can I do? P.S.: I use MX Linux (Debian, KDE Plasma) and run Home Assistant in an Oracle virtual machine. |
Supervisor 2023.11.6 is now on the stable channel. If you encounter the problem again, please check the following things:
Is host_internet/supervisor_internet true in ha network info?
Are there any issues listed in ha resolution info? |
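In CLI terms, the two checks amount to something like this (a sketch; on a healthy system both fields are true and the lists are empty):
ha network info      # host_internet: true, supervisor_internet: true
ha resolution info   # issues: [] and unhealthy: [] on a healthy system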
@agners It still failed for me again trying to update to 2023.11.6. I ran ha network info and it showed both host_internet and supervisor_internet set to true. I would include the entire output but got too frustrated trying to figure out how to copy/paste the output from a terminal window... what a mess that is. I ran ha resolution info and got the following:
[core-ssh ~]$ ha resolution info
checks:
- enabled: true
  slug: free_space
- enabled: true
  slug: multiple_data_disks
- enabled: true
  slug: addon_pwned
- enabled: true
  slug: docker_config
- enabled: true
  slug: network_interface_ipv4
- enabled: true
  slug: core_security
- enabled: true
  slug: dns_server_ipv6
- enabled: true
  slug: supervisor_trust
- enabled: true
  slug: backups
- enabled: true
  slug: dns_server
issues:
- context: supervisor
  reference: null
  type: update_failed
  uuid: 75b9b0551abe49858be284d00d36c5a7
suggestions: []
unhealthy:
- supervisor
unsupported: []
|
To copy from the terminal window I:
1. Run the command and pipe the output to a text file.
2. SFTP that text file over to my desktop.
An alternative is to set up SSH login, log in via SSH from the desktop, and then copy/paste easily.
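As a concrete sketch of steps 1 and 2, assuming the SSH add-on, where /config is also reachable over SFTP/Samba:
ha resolution info > /config/resolution.txt    # step 1: pipe the output to a file
# step 2: fetch /config/resolution.txt from the desktop over SFTP or Samba, then paste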
|
Sorry, still can't update to 2023.11.6. [281472299704896] Error updating Home Assistant Supervisor: Update of Supervisor failed: Can't install ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6/json: Not Found ("No such image: ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6") The above exception was the direct cause of the following exception: Traceback (most recent call last): |
And from the Supervisor log: 23-11-29 21:43:25 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6/json: Not Found ("No such image: ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6") |
More Supervisor logs (with debug-level logging and diagnostics enabled): 23-11-29 23:37:01 INFO (MainThread) [supervisor.supervisor] Repairing Supervisor 2023.10.1 |
@jensjakobandersen thanks for the logs. It really seems that the fetch operation runs for quite some time, but in the end things just go south. Can you set up host SSH access on port 22222 and run the following command from the HAOS shell?
|
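The command itself was not captured here; a plausible reconstruction of the request, run from the port-22222 host shell, is a manual timed pull of the failing image:
time docker pull ghcr.io/home-assistant/aarch64-hassio-supervisor:2023.11.6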
So I was able to reproduce a problem with the same symptoms today after a power outage. I think these are the same problems underneath. Unfortunately there is no easy solution as this seems to be caused by data corruption. Please see #4738 (comment) for more details. |
I seem to have the same problem here, getting the not found / no such image message. I tried to ignore the update for now, but each time I have to reboot (e.g. after installing a HACS repository), the Raspberry does not come up again. I have to physically remove power to make it boot again. ha network info looks OK: host_internet and supervisor_internet are true. ha resolution info shows
Running on a Raspberry Pi 3 with an SD card. Will try to reinstall from SSD ... |
I've tried it in the Terminal add-on a couple of times. I had to disable Protection mode. |
@agners I tried re-installing HA on my Yellow and now I can't even get it installed. It seems I'm getting similar/same errors as before when I tried updating. Below are the errors I see. So unfortunately now I have no HA that I can even get running at all. :( Are you sure nothing else changed with the 2023.11.x release?? It seems a bit too unusual that this many of us would mysteriously have corruption. |
Pretty strange. Can you install the 2023.10.x version? Just for verification? |
Current state: each time I reboot, the Raspberry does not come up again - I guess because the Supervisor update failed. Then I have to switch power off/on and it boots again. |
This has happened to me also; prior to this the Supervisor was reporting issues. Before that it was perfectly healthy, and I haven't made any changes for quite some time now. |
@xgolubev in your cases, it definitely is that Docker storage issue described in #4738 (comment).
@GJSchroeder you mean in the Supervisor 2023.11.x release? Nothing really stands out that would explain this issue. But I could have missed something; I can't rule that out... We have so many installations these days that it is quite often the case that several people independently run into the same problem, even when it involves power cuts. That you can observe it in a new installation is a bit strange. Did you remove power at any point while the LEDs were on? (Same question to @jasperzz.) |
@hubtub2 I've read about that type of behavior from folks with a dying SD card. It seems they become a bit flaky before they die entirely. Is that still on the SD card? |
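A rough way to look for a failing SD card from the HAOS host shell (a sketch; "mmc" is the kernel's name for the SD/eMMC controller):
dmesg | grep -iE "mmc|i/o error" | tail -n 20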
Interesting - there might be a solution:
Solved my issue yesterday. Indeed a dying SD card. |
I solved it by using the script referenced above. With one addition - I had to run the command line ha banner to get Home Assistant started. Here is the result of my ha info command (after waiting quite a while for the Supervisor to update itself at startup):
ha info
arch: aarch64
|
Yes, still SD card. Parts are here to upgrade to SSD. I hope you are right that this is just a statistical effect of the large installed user base. If that is the case, we should see such a thread with every future release ;-) |
As mentioned in #4738 (comment), HAOS 11.2 comes with a new Docker version which includes a fix for that particular type of data corruption. So I expect that the number of people seeing this issue will slowly decline as people update to HAOS 11.2. Let's see. |
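To confirm which Docker daemon a given installation actually runs after the OS update (a sketch, relying on the version field the ha CLI prints):
ha info | grep -i docker    # should report something newer than the 24.0.6 seen in the dockerd logs above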
@agners When I first installed HA on the Yellow, I had a lot of difficulty, and following the normal procedure never worked. I had to use the RPIBoot method and eventually got it to work, and all my automations etc. worked fine after restoring the backup from my previous Raspberry Pi. I had the same issues when trying to re-install 2023.11.5 and got those errors, but even with the RPIBoot method it still failed. I then tried again to install 2023.10.5, and it fails partway through the install. It seems to get mostly done, and I can see it on the network, but when I try to connect to it I get a message saying "Preparing Home Assistant. This may take up to 20 minutes." I've left it for hours and it never completes. I'm honestly disappointed with the installation process for Yellow -- trying to figure out problems just by watching blinking LEDs is painful at best. But the short answer is yes, it's very likely that during one of those attempts I removed power when the LEDs were on, and maybe there are some corrupted files there that are causing the issue. Between each attempt, I restart the Yellow box with both the red and blue buttons pushed, which I thought was supposed to basically do a factory reset. But maybe it's not actually doing a clean wipe of everything (I'm using a CM4 with, I think, 32 GB storage, not the CM4 Lite). I'm away now but will try some more things when I return next week. If there's a different/better way to completely wipe the Yellow CM4 storage, I'm happy to try it. Worst case, I'll go back to my previous Raspberry Pi and see if I can get that updated to a newer build; if so, maybe I will have to just ditch the Yellow (unfortunately). |
@GJSchroeder the RPIBoot method is definitely not the regular experience people should go through. It is hard to tell why the regular procedure did not work out for you. I am sorry for the hassle 😢 Not sure when you did those installations, but on October 25th there was an update to the installer (see https://github.com/NabuCasa/buildroot-installer/releases/tag/yellow-installer-20231025). At least on my end, that version of the installer worked flawlessly every time. If you have been able to update to HAOS 11.2, then the device wipe mechanism should be good enough (as it wipes the complete Docker storage). |
Update:
So the case is closed for me, but since it happened on a fresh install on a different SD card, it does not look like a coincidence to me. Hope my information helps. Maybe it is the old Pi 3B? Or some network IPv4/IPv6 problem? |
@agners I spent several hours yesterday trying many, many times to get a clean install and it is still failing. I tried installing the latest HA OS 11.2 and still get the same error during install as noted earlier (and included below). I also tried using an older version, 10.5 from August, and it fails in a different way: it gets partway installed, I can see homeassistant on the network, but when I try to connect it shows a message saying it's installing and it will take up to 20 minutes. I've left it that way overnight and it never progressed. So at this point, it seems I basically have a bricked HA Yellow that I can't even do a fresh install on. What I've tried doing typically is the following:
1. Run RPIBoot. It seems to complete OK, although there is a message in the output that says "Cannot open file fixup4.dat". I did a little looking into that, and it seems it can be ignored?
2. Use Raspberry Pi Imager to image HA Yellow onto the device. I've tried just selecting the HA Yellow from the choices (I assume it picks the latest), as well as downloading the latest image from GitHub and pointing the imager to use that image, as well as downloading older versions and trying those.
3. Restart Yellow.
I'm using a CM4 with eMMC. I do also have an NVMe SSD but have not been using it, in order to eliminate any extra issues. Here is the log from the failed attempt at a clean install of 11.2:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/udev.sh
[15:32:47] INFO: Using udev information from host
cont-init: info: /etc/cont-init.d/udev.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun supervisor (no readiness notification)
services-up: info: copying legacy longrun watchdog (no readiness notification)
[15:32:48] INFO: Starting local supervisor watchdog...
s6-rc: info: service legacy-services successfully started
23-12-12 15:32:51 INFO (MainThread) [__main__] Initializing Supervisor setup
23-12-12 15:32:51 INFO (MainThread) [supervisor.docker.network] Can't find Supervisor network, creating a new network
23-12-12 15:32:52 INFO (MainThread) [supervisor.bootstrap] Seting up coresys for machine: yellow
23-12-12 15:32:52 INFO (MainThread) [supervisor.docker.supervisor] Attaching to Supervisor ghcr.io/home-assistant/aarch64-hassio-supervisor with version 2023.11.6
23-12-12 15:32:52 INFO (MainThread) [supervisor.docker.supervisor] Connecting Supervisor to hassio-network
23-12-12 15:32:52 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state initialize
23-12-12 15:32:52 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-12-12 15:32:52 INFO (MainThread) [__main__] Setting up Supervisor
23-12-12 15:32:52 INFO (MainThread) [supervisor.api] Starting API on 172.30.32.2
23-12-12 15:32:52 INFO (MainThread) [supervisor.hardware.monitor] Started Supervisor hardware monitor
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.manager] Connected to system D-Bus.
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.agent] Load dbus interface io.hass.os
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.hostname] Load dbus interface org.freedesktop.hostname1
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.logind] Load dbus interface org.freedesktop.login1
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.network] Load dbus interface org.freedesktop.NetworkManager
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.rauc] Load dbus interface de.pengutronix.rauc
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.resolved] Load dbus interface org.freedesktop.resolve1
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.systemd] Load dbus interface org.freedesktop.systemd1
23-12-12 15:32:52 INFO (MainThread) [supervisor.dbus.timedate] Load dbus interface org.freedesktop.timedate1
23-12-12 15:32:53 INFO (MainThread) [supervisor.host.services] Updating service information
23-12-12 15:32:53 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
23-12-12 15:32:53 WARNING (SyncWorker_1) [supervisor.host.sound] Can't update PulseAudio data: Failed to connect to pulseaudio server
23-12-12 15:32:53 INFO (MainThread) [supervisor.host.network] Updating local network information
23-12-12 15:32:53 INFO (MainThread) [supervisor.host.apparmor] Loading AppArmor Profiles: {'hassio-supervisor'}
23-12-12 22:32:53 INFO (MainThread) [supervisor.docker.monitor] Started docker events monitor
23-12-12 22:32:53 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
23-12-12 22:32:53 INFO (MainThread) [supervisor.docker.interface] Found ghcr.io/home-assistant/aarch64-hassio-cli versions: [<AwesomeVersion CalVer '2023.10.0'>]
23-12-12 22:32:53 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/aarch64-hassio-cli with version 2023.10.0
23-12-12 22:32:53 INFO (MainThread) [supervisor.plugins.cli] Starting CLI plugin
23-12-12 22:32:54 INFO (MainThread) [supervisor.docker.cli] Starting CLI ghcr.io/home-assistant/aarch64-hassio-cli with version 2023.10.0 - 172.30.32.5
23-12-12 22:32:54 INFO (MainThread) [supervisor.docker.interface] Found ghcr.io/home-assistant/aarch64-hassio-dns versions: [<AwesomeVersion CalVer '2023.06.2'>]
23-12-12 22:32:54 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/aarch64-hassio-dns with version 2023.06.2
23-12-12 22:32:54 INFO (MainThread) [supervisor.plugins.dns] Starting CoreDNS plugin
23-12-12 22:32:55 INFO (MainThread) [supervisor.docker.dns] Starting DNS ghcr.io/home-assistant/aarch64-hassio-dns with version 2023.06.2 - 172.30.32.3
23-12-12 22:32:55 INFO (MainThread) [supervisor.plugins.dns] Updated /etc/resolv.conf
23-12-12 22:32:55 INFO (MainThread) [supervisor.docker.interface] Found ghcr.io/home-assistant/aarch64-hassio-audio versions: [<AwesomeVersion CalVer '2023.10.0'>]
23-12-12 22:32:55 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/aarch64-hassio-audio with version 2023.10.0
23-12-12 22:32:55 INFO (MainThread) [supervisor.plugins.audio] Starting Audio plugin
23-12-12 22:32:56 INFO (MainThread) [supervisor.docker.audio] Starting Audio ghcr.io/home-assistant/aarch64-hassio-audio with version 2023.10.0 - 172.30.32.4
23-12-12 22:32:56 INFO (MainThread) [supervisor.docker.interface] Found ghcr.io/home-assistant/aarch64-hassio-observer versions: [<AwesomeVersion CalVer '2023.06.0'>]
23-12-12 22:32:56 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/aarch64-hassio-observer with version 2023.06.0
23-12-12 22:32:56 INFO (MainThread) [supervisor.plugins.observer] Starting observer plugin
23-12-12 22:32:57 INFO (MainThread) [supervisor.docker.observer] Starting Observer ghcr.io/home-assistant/aarch64-hassio-observer with version 2023.06.0 - 172.30.32.6
23-12-12 22:32:57 INFO (MainThread) [supervisor.docker.interface] Found ghcr.io/home-assistant/aarch64-hassio-multicast versions: [<AwesomeVersion CalVer '2023.06.2'>]
23-12-12 22:32:57 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/aarch64-hassio-multicast with version 2023.06.2
23-12-12 22:32:57 INFO (MainThread) [supervisor.plugins.multicast] Starting Multicast plugin
23-12-12 22:32:58 INFO (MainThread) [supervisor.docker.multicast] Starting Multicast ghcr.io/home-assistant/aarch64-hassio-multicast with version 2023.06.2 - Host
23-12-12 22:32:58 INFO (MainThread) [supervisor.plugins.manager] cli does not have the latest version 2023.11.0, updating
23-12-12 22:32:58 INFO (MainThread) [supervisor.docker.interface] Updating image ghcr.io/home-assistant/aarch64-hassio-cli:2023.10.0 to ghcr.io/home-assistant/aarch64-hassio-cli:2023.11.0
23-12-12 22:32:58 INFO (MainThread) [supervisor.docker.interface] Downloading docker image ghcr.io/home-assistant/aarch64-hassio-cli with tag 2023.11.0.
23-12-12 22:33:53 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/aarch64-hassio-cli:2023.11.0: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/aarch64-hassio-cli:2023.11.0/json: Not Found ("No such image: ghcr.io/home-assistant/aarch64-hassio-cli:2023.11.0")
23-12-12 22:33:53 ERROR (MainThread) [supervisor.plugins.cli] CLI update failed
23-12-12 22:33:53 ERROR (MainThread) [supervisor.plugins.manager] Can't update cli to 2023.11.0, the Supervisor healthy could be compromised!
23-12-12 22:33:53 INFO (MainThread) [supervisor.resolution.module] Create new suggestion execute_update - plugin / cli
23-12-12 22:33:53 INFO (MainThread) [supervisor.resolution.module] Create new issue update_failed - plugin / cli
23-12-12 22:33:53 INFO (MainThread) [supervisor.homeassistant.secrets] Loaded 0 Home Assistant secrets
23-12-12 22:33:53 INFO (MainThread) [supervisor.docker.interface] No version found for ghcr.io/home-assistant/yellow-homeassistant
23-12-12 22:33:53 INFO (MainThread) [supervisor.homeassistant.core] No Home Assistant Docker image ghcr.io/home-assistant/yellow-homeassistant found.
23-12-12 22:33:53 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/yellow-homeassistant with version landingpage
23-12-12 22:33:53 INFO (MainThread) [supervisor.homeassistant.core] Using preinstalled landingpage
23-12-12 22:33:53 INFO (MainThread) [supervisor.homeassistant.core] Starting HomeAssistant landingpage
23-12-12 22:33:53 INFO (MainThread) [supervisor.homeassistant.module] Update pulse/client.config: /data/tmp/homeassistant_pulse
23-12-12 22:33:54 INFO (MainThread) [supervisor.docker.homeassistant] Starting Home Assistant ghcr.io/home-assistant/yellow-homeassistant with version landingpage
23-12-12 22:33:54 INFO (MainThread) [supervisor.os.manager] Detect Home Assistant Operating System 11.2 / BootSlot A
23-12-12 22:33:54 INFO (MainThread) [supervisor.store.git] Cloning add-on https://github.com/home-assistant/addons repository
23-12-12 22:33:54 INFO (MainThread) [supervisor.store.git] Cloning add-on https://github.com/esphome/home-assistant-addon repository
23-12-12 22:33:54 INFO (MainThread) [supervisor.store.git] Cloning add-on https://github.com/hassio-addons/repository repository
23-12-12 22:33:55 ERROR (MainThread) [supervisor.store.git] Can't clone https://github.com/hassio-addons/repository repository: Cmd('git') failed due to: exit code(128)
cmdline: git clone -v --recursive --depth=1 --shallow-submodules -- https://github.com/hassio-addons/repository /data/addons/git/a0d7b954
stderr: 'Cloning into '/data/addons/git/a0d7b954'...
POST git-upload-pack (175 bytes)
POST git-upload-pack (244 bytes)
error: RPC failed; curl 56 OpenSSL SSL_read: OpenSSL/3.1.4: error:0A000119:SSL routines::decryption failed or bad record mac, errno 0
error: 1187 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
'.
23-12-12 22:33:55 ERROR (MainThread) [supervisor.store] Can't retrieve data from https://github.com/hassio-addons/repository due to
23-12-12 22:33:55 INFO (MainThread) [supervisor.resolution.module] Create new suggestion execute_remove - store / a0d7b954
23-12-12 22:33:55 INFO (MainThread) [supervisor.resolution.module] Create new issue fatal_error - store / a0d7b954
23-12-12 22:33:56 INFO (MainThread) [supervisor.store] Loading add-ons from store: 31 all - 31 new - 0 remove
23-12-12 22:33:56 INFO (MainThread) [supervisor.addons] Found 0 installed add-ons
23-12-12 22:33:56 INFO (MainThread) [supervisor.backups.manager] Found 0 backup files
23-12-12 22:33:56 INFO (MainThread) [supervisor.discovery] Loaded 0 messages
23-12-12 22:33:56 INFO (MainThread) [supervisor.ingress] Loaded 0 ingress sessions
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state setup
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.check] System checks complete
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state setup
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-12-12 22:33:56 INFO (MainThread) [supervisor.jobs] 'ResolutionFixup.run_autofix' blocked from execution, system is not running - setup
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state setup
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-12-12 22:33:56 INFO (MainThread) [__main__] Running Supervisor
23-12-12 22:33:56 INFO (MainThread) [supervisor.os.manager] Rauc: A - marked slot kernel.0 as good
23-12-12 22:33:56 INFO (MainThread) [supervisor.addons] Phase 'initialize' starting 0 add-ons
23-12-12 22:33:56 INFO (MainThread) [supervisor.addons] Phase 'system' starting 0 add-ons
23-12-12 22:33:56 INFO (MainThread) [supervisor.addons] Phase 'services' starting 0 add-ons
23-12-12 22:33:56 INFO (MainThread) [supervisor.core] Skipping start of Home Assistant
23-12-12 22:33:56 INFO (MainThread) [supervisor.addons] Phase 'application' starting 0 add-ons
23-12-12 22:33:56 INFO (MainThread) [supervisor.misc.tasks] All core tasks are scheduled
23-12-12 22:33:56 INFO (MainThread) [supervisor.core] Supervisor is up and running
23-12-12 22:33:56 INFO (MainThread) [supervisor.homeassistant.core] Home Assistant setup
23-12-12 22:33:56 INFO (MainThread) [supervisor.docker.interface] Updating image ghcr.io/home-assistant/yellow-homeassistant:landingpage to ghcr.io/home-assistant/yellow-homeassistant:2023.12.1
23-12-12 22:33:56 INFO (MainThread) [supervisor.docker.interface] Downloading docker image ghcr.io/home-assistant/yellow-homeassistant with tag 2023.12.1.
23-12-12 22:33:56 INFO (MainThread) [supervisor.host.info] Updating local host information
23-12-12 22:33:56 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state running
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for free_space/system
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for multiple_data_disks/system
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for ipv4_connection_problem/system
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for trust/supervisor
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_ipv6_error/dns_server
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for pwned/addon
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for docker_config/system
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for no_current_backup/system
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.module] Create new suggestion create_full_backup - system / None
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.module] Create new issue no_current_backup - system / None
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_failed/dns_server
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.checks.base] Run check for security/core
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.check] System checks complete
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state running
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.fixup] Starting system autofix at state running
23-12-12 22:33:56 INFO (MainThread) [supervisor.resolution.fixup] System autofix complete
23-12-12 22:33:57 INFO (MainThread) [supervisor.host.services] Updating service information
23-12-12 22:33:57 INFO (MainThread) [supervisor.host.network] Updating local network information
23-12-12 22:33:57 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
23-12-12 22:33:57 WARNING (SyncWorker_0) [supervisor.host.sound] Can't update PulseAudio data: Failed to connect to pulseaudio server
23-12-12 22:33:57 INFO (MainThread) [supervisor.host.manager] Host information reload completed
23-12-12 22:36:49 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/yellow-homeassistant:2023.12.1: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/yellow-homeassistant:2023.12.1/json: Not Found ("No such image: ghcr.io/home-assistant/yellow-homeassistant:2023.12.1")
23-12-12 22:36:49 WARNING (MainThread) [supervisor.homeassistant.core] Error on Home Assistant installation. Retry in 30sec
23-12-12 22:37:19 INFO (MainThread) [supervisor.docker.interface] Updating image ghcr.io/home-assistant/yellow-homeassistant:landingpage to ghcr.io/home-assistant/yellow-homeassistant:2023.12.1
23-12-12 22:37:19 INFO (MainThread) [supervisor.docker.interface] Downloading docker image ghcr.io/home-assistant/yellow-homeassistant with tag 2023.12.1.
23-12-12 22:39:49 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/yellow-homeassistant:2023.12.1: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/yellow-homeassistant:2023.12.1/json: Not Found ("No such image: ghcr.io/home-assistant/yellow-homeassistant:2023.12.1")
23-12-12 22:39:49 WARNING (MainThread) [supervisor.homeassistant.core] Error on Home Assistant installation. Retry in 30sec
23-12-12 22:40:19 INFO (MainThread) [supervisor.docker.interface] Updating image ghcr.io/home-assistant/yellow-homeassistant:landingpage to ghcr.io/home-assistant/yellow-homeassistant:2023.12.1
23-12-12 22:40:19 INFO (MainThread) [supervisor.docker.interface] Downloading docker image ghcr.io/home-assistant/yellow-homeassistant with tag 2023.12.1.
23-12-12 22:43:06 ERROR (MainThread) [supervisor.docker.interface] Can't install ghcr.io/home-assistant/yellow-homeassistant:2023.12.1: 404 Client Error for http+docker://localhost/v1.43/images/ghcr.io/home-assistant/yellow-homeassistant:2023.12.1/json: Not Found ("No such image: ghcr.io/home-assistant/yellow-homeassistant:2023.12.1")
23-12-12 22:43:06 WARNING (MainThread) [supervisor.homeassistant.core] Error on Home Assistant installation. Retry in 30sec
23-12-12 22:43:36 INFO (MainThread) [supervisor.docker.interface] Updating image ghcr.io/home-assistant/yellow-homeassistant:landingpage to ghcr.io/home-assistant/yellow-homeassistant:2023.12.1
23-12-12 22:43:36 INFO (MainThread) [supervisor.docker.interface] Downloading docker image ghcr.io/home-assistant/yellow-homeassistant with tag 2023.12.1.
|
Could you try installing on the SSD to test that aspect? |
@GJSchroeder it seems like any container update fails (also the CLI update) 🤔 Can you connect via USB-C to see the serial console (see also https://yellow.home-assistant.io/guides/use-serial-console-windows/ or https://yellow.home-assistant.io/guides/use-serial-console-linux-macos/)? What I would be interested in is what happens when you manually pull. Do you have the NVMe SSD installed at this point? |
@agners I tried the USB connection as described in your link, installed PuTTY, and connected my Windows laptop to the Yellow (with jumpers set appropriately), but in Device Manager I never see the "Ports" entry, so I couldn't get past that step. The guide says USB-C on the Yellow to USB-A on the computer, but my laptop only has USB-C. I can't imagine that would make a difference though. UPDATE: I also tried to install on the SSD but could not get that to work. I tried the method of imaging the eMMC and pushing the blue button during startup, which should cause it to install on the SSD, but nothing really happened. I also tried imaging the SSD directly (while it was in a separate case outside of the Yellow box) but can't get the Yellow to boot from the NVMe SSD. I finally got so frustrated that I gave up and went back to my Raspberry Pi. Here's where it gets really weird. I restarted my original Pi, which was running 2023.10.5. When I first started it, I got the list of updates, so the first one I tried to apply was HA Supervisor 11.2. It failed in the same way it was failing when I had the Yellow running a few weeks ago! I restarted it and tried a few more times and it continued to fail. Finally I realized that before I originally tried to set up the Yellow several weeks ago, I had replaced my network switch with one that supported POE, because that was the Yellow kit I had gotten. I had also noticed, reading quite a few other posts on the various forums, other people having very similar problems with the Yellow, and it seemed like most of them were using the POE version. So I replaced the POE switch with my old non-POE switch, restarted HA, and Eureka! All of the Supervisor, Core and OS updates applied fine. So my theory is that the POE Yellow, or the POE switch, has some weird network issue that causes installations/updates of HA to fail. To test this, I will go buy a regular power adaptor and try powering the Yellow with that, using a non-POE ethernet cable and my non-POE switch. It sounds like a long shot, but that's my last try. If that's not the issue, then I give up on the Yellow and will stick with my Raspberry Pi, which is running fine now. |
@agners A follow-up to my previous post. I tried using a regular power cable and non-POE switch on the Yellow. I did a reset (turn it on while pressing red button), then restarted with an image on USB SD card. Shockingly, it installed perfectly. That's the first time it's worked that way in dozens and dozens of attempts that I've done over the past 6 weeks when I was using POE ethernet with POE switch. Just to make sure it wasn't a random success, I reset it and then did the RPIBoot, write image to the Yellow and restart to install. Again it worked perfectly. After that I was able to restore a previous backup and all my devices, automations etc are working fine. I then saw that I was running 2023.12.1 and that 12.2 was available. I attempted the update and it succeeded. That was the first time I was able to apply an update since my first install of the Yellow 6 weeks ago. I then re-connected the Yellow to the POE cable and the POE switch and it's all still running fine. I guess the next test will be whether I can update to the next HA version using that POE switch. I'm not a network guru, but it does seem like there's some problem with either the Yellow POE, or the POE switch that is causing certain activities like installation and updates to fail. Very, very strange. For reference, my switch is TP-Link 5-port 10/100Mbps model TL-SF1005LP. But for now, I finally have the Yellow up and running again so I'm happy. Thank you for the help. |
Describe the issue you are experiencing
Update to 2023.11.3 fails
What type of installation are you running?
Home Assistant OS
Which operating system are you running on?
Home Assistant Operating System
Steps to reproduce the issue
Anything in the Supervisor logs that might be useful for us?
System Health information
System Information
Home Assistant Community Store
Home Assistant Cloud
Home Assistant Supervisor
Dashboards
Recorder
Sonoff
Supervisor diagnostics
config_entry-hassio-6854af4adda5d85d74a459955abf37b0.json.txt
Additional information
I have also filed a bug for 2023.11.1 here:
#4703