- Log in to Prod blockchain node N1
- Export the blockchain data via "geth export" (archive naming is sketched just after this list)
- Archive the keystore directory
- Stop all Dev blockchain nodes
- Log in to Dev blockchain node N1
- Import the blockchain data via "geth import"
- Replace the dev keystore directory with the keystore directory from the Prod node N1 archive
- Restart QA/Dev blockchain node N1
- Restart the other QA/Dev blockchain nodes
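All archive names below follow the pattern p01chainYYMMDD / p01keysYYMMDD; the hardcoded 210805 in the import steps is just the date stamp of the example backup. A minimal sketch of that convention, with STAMP, CHAIN_ARCHIVE, and KEYS_ARCHIVE as hypothetical helper variables not used verbatim below:

STAMP=$(date +%y%m%d)                    # e.g. 210805 for 2021-08-05
CHAIN_ARCHIVE=backups/p01chain$STAMP.gz  # compressed chain export
KEYS_ARCHIVE=backups/p01keys$STAMP.gz    # compressed keystore archive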
// echo "log$(date +'%m%d%y')"   (prints a date-stamped log name such as log080521)
ps aux | grep geth
kill [PID] // use the process ID obtained above; a plain kill (SIGTERM) is enough for geth to shut down cleanly, so "kill -9" is not needed and does not give geth a chance to close its database gracefully
ps aux | grep geth
Check that geth is no longer running. If it is, the screen session running starter.sh is probably restarting it; run "pkill screen" and repeat the step above. A compact version of this sequence is sketched below.
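A minimal sketch of the stop-and-verify sequence, using pgrep/pkill instead of ps | grep, assuming geth is launched by starter.sh inside screen as described below:

PID=$(pgrep -x geth)            # find the running geth process, if any
[ -n "$PID" ] && kill "$PID"    # plain SIGTERM lets geth close its database cleanly
sleep 10
if pgrep -x geth > /dev/null; then
    pkill screen                # stop the screen session that keeps relaunching geth
    pkill -x geth
fi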
geth --datadir data export backups/p01chain$(date +%y%m%d)
NOTE: if running this from cron, escape the % characters in the date format, since crontab treats an unescaped % as a newline: geth --datadir data export backups/p01chain$(date +'\%y\%m\%d'); an example crontab entry is shown below.
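A hedged example of a full crontab entry; the schedule, home directory, and log path are assumptions, and geth must already be stopped when the job runs because the export opens the chain database, which is locked while geth is running:

# daily export at 02:00; paths assume the node user's home directory layout used above
0 2 * * * cd /home/ubuntu && geth --datadir data export backups/p01chain$(date +\%y\%m\%d) >> backups/export.log 2>&1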
(launch screen, run "sh starter.sh" inside it, then press Ctrl+A followed by "d" to detach and leave geth running)
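An equivalent non-interactive restart, assuming starter.sh is in the current directory; screen -dmS starts a detached, named session:

screen -dmS geth sh starter.sh   # relaunch geth via starter.sh in a detached screen session
screen -ls                       # confirm the session is up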
tar -czvf backups/p01chain$(date +%y%m%d).gz backups/p01chain$(date +%y%m%d)
rm backups/p01chain$(date +%y%m%d)
tar -czvf backups/p01keys$(date +%y%m%d).gz data/keystore
scp -i mykey.pem backups/p01chain$(date +%y%m%d).gz ubuntu@my.ec2.id.amazonaws.com:/backupsImport/
scp -i mykey.pem backups/p01keys$(date +%y%m%d).gz ubuntu@my.ec2.id.amazonaws.com:/backupsImport/
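Optionally, verify the copies before importing; a quick sketch with sha256sum (run the same command on the Dev node against /backupsImport and compare the hashes):

sha256sum backups/p01chain$(date +%y%m%d).gz backups/p01keys$(date +%y%m%d).gz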
ps aux | grep geth
kill [PID]
ps aux | grep geth
If geth appears again in the list above, the screen session is restarting it; kill screen to stop that:
pkill screen
geth --datadir data removedb
rm data/keystore/*
tar -xvzf backupsImport/p01chain210805.gz -C backupsImport
ls data/keystore
tar -xvzf backupsImport/p01keys210805.gz -C .
geth --datadir data init dppprod.json
geth --datadir data import backupsImport/backups/p01chain210805
rm backupsImport/backups/p01chain210805
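Finally (overview steps 8 and 9), restart geth on the Dev node the same way as on Prod and, once it is back up, optionally confirm that the imported chain height matches Prod; the attach path below assumes geth's default IPC endpoint under data/:

screen -dmS geth sh starter.sh                       # restart geth on the Dev node
geth attach --exec 'eth.blockNumber' data/geth.ipc   # should report the Prod chain height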