I'm currently on 9.1.2 and want to upgrade to 10.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
```bash
git checkout master
git pull
SENTRY_IMAGE=getsentry/sentry:10 ./install.sh
```
I only set the Postgres password in `docker-compose.yml` and `sentry.conf.py`; the rest of the config is the default.
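For reference, the only edit to `sentry.conf.py` is the database password (shown below with a placeholder value, assuming the stock on-premise layout), plus the matching `POSTGRES_PASSWORD` environment variable on the `postgres` service in `docker-compose.yml`:

```python
# sentry/sentry.conf.py (excerpt) -- only PASSWORD was changed from the default;
# the value here is a placeholder, not the real password
DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "<redacted>",
        "HOST": "postgres",
        "PORT": "",
    }
}
```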
Install log:
```
Checking minimum requirements...
Removing network onpremise_default
Network onpremise_default not found.
Removing network sentry_onpremise_default
Network sentry_onpremise_default not found.
Creating volumes for persistent storage...
Created sentry-data.
Created sentry-postgres.
Created sentry-redis.
Created sentry-zookeeper.
Created sentry-kafka.
Created sentry-clickhouse.
Created sentry-symbolicator.
sentry/sentry.conf.py already exists, skipped creation.
sentry/config.yml already exists, skipped creation.
sentry/requirements.txt already exists, skipped creation.
Building and tagging Docker images...
Pulling smtp ...
Pulling memcached ...
Pulling redis ...
Pulling postgres ...
Pulling zookeeper ...
Pulling kafka ...
Pulling clickhouse ...
Pulling snuba-api ...
Pulling snuba-consumer ...
Pulling snuba-replacer ...
Pulling snuba-cleanup ...
Pulling symbolicator ...
Pulling symbolicator-cleanup ...
Pulling web ...
Pulling cron ...
Pulling worker ...
Pulling post-process-forwarder ...
Pulling sentry-cleanup ...
Pulling memcached ... pulling from library/memcached
Pulling memcached ... digest: sha256:6627e971255440a1bd...
Pulling memcached ... status: image is up to date for m...
Pulling memcached ... done
Pulling redis ... pulling from library/redis
Pulling redis ... digest: sha256:cb9783b1c39bb34f8d...
Pulling redis ... status: image is up to date for r...
Pulling redis ... done
Pulling postgres ... pulling from library/postgres
Pulling postgres ... digest: sha256:92042d6c1c79d2a488...
Pulling postgres ... status: image is up to date for p...
Pulling postgres ... done
Pulling snuba-replacer ... pulling from getsentry/snuba
Pulling snuba-replacer ... digest: sha256:4ee01862279967f14f...
Pulling snuba-replacer ... status: image is up to date for g...
Pulling snuba-replacer ... done
Pulling symbolicator-cleanup ... done
Pulling worker ... done
Pulling post-process-forwarder ... done
Pulling zookeeper ... pulling from confluentinc/cp-zook...
Pulling zookeeper ... digest: sha256:c63871c17b038e1685...
Pulling zookeeper ... status: image is up to date for c...
Pulling zookeeper ... done
Pulling web ... done
Pulling cron ... done
Pulling smtp ... pulling from tianon/exim4
Pulling smtp ... digest: sha256:f01923e8154add4b30...
Pulling smtp ... status: image is up to date for t...
Pulling smtp ... done
Pulling clickhouse ... pulling from yandex/clickhouse-se...
Pulling clickhouse ... digest: sha256:9cda4acf02b112f7c1...
Pulling clickhouse ... status: image is up to date for y...
Pulling clickhouse ... done
Pulling snuba-consumer ... pulling from getsentry/snuba
Pulling snuba-consumer ... digest: sha256:4ee01862279967f14f...
Pulling snuba-consumer ... status: image is up to date for g...
Pulling snuba-consumer ... done
Pulling symbolicator ... pulling from getsentry/symbolicator
Pulling symbolicator ... digest: sha256:9c6433456752544a33...
Pulling symbolicator ... status: image is up to date for g...
Pulling symbolicator ... done
Pulling snuba-api ... pulling from getsentry/snuba
Pulling snuba-api ... digest: sha256:4ee01862279967f14f...
Pulling snuba-api ... status: image is up to date for g...
Pulling snuba-api ... done
Pulling kafka ... pulling from confluentinc/cp-kafka
Pulling kafka ... digest: sha256:c5ff41b494329e9dea...
Pulling kafka ... status: image is up to date for c...
Pulling kafka ... done
Pulling sentry-cleanup ... done
Pulling snuba-cleanup ... done
Some service image(s) must be built from source by running:
docker-compose build symbolicator-cleanup worker post-process-forwarder web cron sentry-cleanup snuba-cleanup
10: Pulling from getsentry/sentry
Digest: sha256:65d0c267c5550a5cc9963aefeb3f8e2c5d0b273978c2583d9041950b6a860c9b
Status: Image is up to date for getsentry/sentry:10
docker.io/getsentry/sentry:10
Building web
Step 1/4 : ARG SENTRY_IMAGE
Step 2/4 : FROM ${SENTRY_IMAGE:-getsentry/sentry:latest}
---> 6b45bcae20ee
Step 3/4 : COPY . /usr/src/sentry
---> Using cache
---> 5f74bed159d7
Step 4/4 : RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
---> Using cache
---> ce4c952a7893
Successfully built ce4c952a7893
Successfully tagged sentry-onpremise-local:latest
smtp uses an image, skipping
memcached uses an image, skipping
redis uses an image, skipping
postgres uses an image, skipping
zookeeper uses an image, skipping
kafka uses an image, skipping
clickhouse uses an image, skipping
snuba-api uses an image, skipping
snuba-consumer uses an image, skipping
snuba-replacer uses an image, skipping
symbolicator uses an image, skipping
Building snuba-cleanup ...
Building symbolicator-cleanup ...
Building web ...
Building cron ...
Building worker ...
Building post-process-forwarder ...
Building sentry-cleanup ...
Building snuba-cleanup
Building web
Building sentry-cleanup
Building worker
Building symbolicator-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> ce4c952a7893
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Using cache
---> a1485bf01aee
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> Using cache
---> 2b9a538e3286
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
---> Using cache
---> d9e7964ef36c
Successfully built d9e7964ef36c
Successfully tagged sentry-cleanup-onpremise-local:latest
Building sentry-cleanup ... done
Building post-process-forwarder
Step 1/4 : ARG SENTRY_IMAGE
Step 2/4 : FROM ${SENTRY_IMAGE:-getsentry/sentry:latest}
---> 6b45bcae20ee
Step 3/4 : COPY . /usr/src/sentry
---> Using cache
---> 5f74bed159d7
Step 4/4 : RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> ecc4a1950513
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Using cache
---> 1abfb2cf8657
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> Using cache
---> 6712ba3c428e
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
---> Using cache
---> ce4c952a7893
Successfully built ce4c952a7893
---> Using cache
---> de402f653341
Successfully built de402f653341
Successfully tagged sentry-onpremise-local:latest
Building web ... done
Building cron
Successfully tagged symbolicator-cleanup-onpremise-local:latest
Building symbolicator-cleanup ... done
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> 54a4a86d7c92
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Using cache
---> 382bdc67fd24
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> Using cache
---> 2c905a45ccbb
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
Step 1/4 : ARG SENTRY_IMAGE
Step 2/4 : FROM ${SENTRY_IMAGE:-getsentry/sentry:latest}
---> Using cache
---> 96a3a5a19798
Successfully built 96a3a5a19798
---> 6b45bcae20ee
Step 3/4 : COPY . /usr/src/sentry
---> Using cache
---> 5f74bed159d7
Step 4/4 : RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
---> Using cache
---> ce4c952a7893
Successfully tagged snuba-cleanup-onpremise-local:latest
Successfully built ce4c952a7893
Building snuba-cleanup ... done
Successfully tagged sentry-onpremise-local:latest
Building worker ... done
Step 1/4 : ARG SENTRY_IMAGE
Step 2/4 : FROM ${SENTRY_IMAGE:-getsentry/sentry:latest}
---> 6b45bcae20ee
Step 3/4 : COPY . /usr/src/sentry
---> Using cache
---> 5f74bed159d7
Step 4/4 : RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
---> Using cache
---> ce4c952a7893
Successfully built ce4c952a7893
Successfully tagged sentry-onpremise-local:latest
Building post-process-forwarder ... done
Step 1/4 : ARG SENTRY_IMAGE
Step 2/4 : FROM ${SENTRY_IMAGE:-getsentry/sentry:latest}
---> 6b45bcae20ee
Step 3/4 : COPY . /usr/src/sentry
---> Using cache
---> 5f74bed159d7
Step 4/4 : RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
---> Using cache
---> ce4c952a7893
Successfully built ce4c952a7893
Successfully tagged sentry-onpremise-local:latest
Building cron ... done
Docker images built.
Bootstrapping Snuba...
Creating network "sentry_onpremise_default" with the default driver
Creating volume "sentry_onpremise_sentry-secrets" with default driver
Creating volume "sentry_onpremise_sentry-smtp" with default driver
Creating volume "sentry_onpremise_sentry-zookeeper-log" with default driver
Creating volume "sentry_onpremise_sentry-kafka-log" with default driver
Creating volume "sentry_onpremise_sentry-smtp-log" with default driver
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_redis_1 ... done
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_kafka_1 ...
Creating sentry_onpremise_kafka_1 ... done
+ '[' b = - ']'
+ snuba bootstrap --help
+ set -- snuba bootstrap --force
+ set gosu snuba snuba bootstrap --force
+ exec gosu snuba snuba bootstrap --force
2020-03-13 18:36:38,245 Connection to Kafka failed (attempt 0)
Traceback (most recent call last):
File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
2020-03-13 18:36:40,249 Connection to Kafka failed (attempt 1)
Traceback (most recent call last):
File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
2020-03-13 18:36:41,631 Topic events created
2020-03-13 18:36:41,631 Topic event-replacements created
2020-03-13 18:36:41,631 Topic snuba-commit-log created
2020-03-13 18:36:41,632 Topic cdc created
2020-03-13 18:36:41,632 Topic errors-replacements created
2020-03-13 18:36:41,632 Topic outcomes created
2020-03-13 18:36:41,632 Topic ingest-sessions created
2020-03-13 18:36:41,697 Tables for dataset events created.
2020-03-13 18:36:41,701 Tables for dataset groupassignee created.
2020-03-13 18:36:41,706 Tables for dataset outcomes_raw created.
2020-03-13 18:36:41,720 Tables for dataset events_migration created.
2020-03-13 18:36:41,733 Tables for dataset outcomes created.
2020-03-13 18:36:41,756 Tables for dataset sessions created.
2020-03-13 18:36:41,760 Tables for dataset groupedmessage created.
2020-03-13 18:36:41,768 Tables for dataset transactions created.
2020-03-13 18:36:41,768 Tables for dataset discover created.
Starting sentry_onpremise_redis_1 ...
Starting sentry_onpremise_redis_1 ... done
Starting sentry_onpremise_clickhouse_1 ...
Starting sentry_onpremise_clickhouse_1 ... done
Starting sentry_onpremise_zookeeper_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Starting sentry_onpremise_kafka_1 ...
Starting sentry_onpremise_kafka_1 ... done
+ '[' m = - ']'
+ snuba migrate --help
+ set -- snuba migrate
+ set gosu snuba snuba migrate
+ exec gosu snuba snuba migrate
2020-03-13 18:36:45,718 Migrating dataset outcomes
2020-03-13 18:36:45,740 Migrating dataset outcomes_raw
2020-03-13 18:36:45,747 Migrating dataset groupedmessage
2020-03-13 18:36:45,759 Migrating dataset sessions
2020-03-13 18:36:45,774 Migrating dataset discover
2020-03-13 18:36:45,775 Migrating dataset transactions
2020-03-13 18:36:45,797 Migrating dataset events_migration
2020-03-13 18:36:45,830 Migrating dataset events
2020-03-13 18:36:45,869 Migrating dataset groupassignee
Error: No such volume: sentry-postgres-new
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/9.6/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ...
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/9.6/data -l logfile start
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* system OID user data types ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for roles starting with 'pg_' ok
Creating dump of global objects ok
Creating dump of database schemas
postgres
template1
ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
Performing Upgrade
------------------
Analyzing all rows in the new cluster ok
Freezing all rows on the new cluster ok
Deleting files from new pg_clog ok
Copying old pg_clog to new server ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Copying old pg_multixact/offsets to new server ok
Deleting files from new pg_multixact/members ok
Copying old pg_multixact/members to new server ok
Setting next multixact ID and offset for new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
postgres
template1
ok
Copying user relation files
/var/lib/postgresql/9.5/data/base/12379/2613
/var/lib/postgresql/9.5/data/base/12379/2683
/var/lib/postgresql/9.5/data/base/12379/2995
/......
ok
Setting next OID for new cluster ok
Sync data directory to disk ok
Creating script to analyze new cluster ok
Creating script to delete old cluster ok
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh
Running this script will delete the old cluster's data files:
./delete_old_cluster.sh
sentry-postgres
sentry-postgres
'./pg_replslot' -> '/to/./pg_replslot'
'./base/12407/20023' -> '/to/./base/12407/20023'
'./base/12407/19203' -> '/to/./base/12407/19203'
'./base/12407/17745' -> '/to/./base/12407/17745'
'......'
sentry-postgres-new
Setting up database...
Starting sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_memcached_1 ...
Starting sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_postgres_1 ...
Starting sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_smtp_1 ...
Creating sentry_onpremise_symbolicator_1 ...
Starting sentry_onpremise_redis_1 ...
Starting sentry_onpremise_clickhouse_1 ... done
Starting sentry_onpremise_redis_1 ... done
Starting sentry_onpremise_kafka_1 ...
Starting sentry_onpremise_kafka_1 ... done
Creating sentry_onpremise_snuba-replacer_1 ...
Creating sentry_onpremise_snuba-api_1 ...
Creating sentry_onpremise_snuba-consumer_1 ...
Creating sentry_onpremise_snuba-replacer_1 ... done
Creating sentry_onpremise_symbolicator_1 ... done
Creating sentry_onpremise_postgres_1 ... done
Creating sentry_onpremise_memcached_1 ... done
Creating sentry_onpremise_snuba-api_1 ... done
Creating sentry_onpremise_snuba-consumer_1 ... done
Creating sentry_onpremise_smtp_1 ... done
upgrade: 1: exec: /etc/sentry/docker-entrypoint.sh: not found
Cleaning up...
```
How can I fix this?