
Endpoint problem when sending data from geth to Kafka #3

Open
xingyushu opened this issue Oct 21, 2019 · 3 comments

@xingyushu

Geth: ...
Kafka: 2.1.1
Go: 1.12.7

I am running Kafka in Docker, exposed on port 9092:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    depends_on: [ zookeeper ]
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.18.129
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /data/product/zj_bigdata/data/kafka/docker.sock:/var/run/docker.sock
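
The error below suggests that `--kafka.endpoint` expects an HTTP endpoint (the Confluent REST Proxy, default port 8082) rather than the raw Kafka broker protocol on 9092. A sketch of a compose service that could sit in front of the broker above, assuming the `confluentinc/cp-kafka-rest` image (the service name and environment values here are illustrative, not from the original setup):

```yaml
  # Hypothetical addition: Confluent REST Proxy bridging HTTP -> Kafka,
  # since --kafka.endpoint appears to want an HTTP URL, not broker:9092.
  rest-proxy:
    image: confluentinc/cp-kafka-rest
    depends_on: [ kafka ]
    ports:
      - "8082:8082"
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: kafka:9092
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082
```

With something like this running, the endpoint passed to geth would point at port 8082 instead of 9092.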

I set the flags like this to send the eth node's block data to Kafka:

geth --datadir ./chaindata/ --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3 --cache 100 --rpc --rpccorsdomain --kafka.endpoint "http://localhost:9092"

It reports an invalid flag configuration:
invalid command: "url=http://localhost:9092"

How should I configure the Kafka parameters so the data is successfully sent to Kafka?

If I omit the custom URL, will the sync still succeed, and how can I verify it?

@Sallery-X
Collaborator

The default is http://localhost:8082/topics/etc; Confluent's default port is 8082.
If you want to use the default URL, just don't pass the Kafka flag at all, e.g. geth --datadir ./chaindata/ --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3 --cache 100 --rpc --rpccorsdomain
In that case, however, the full-node data is not synced to the Kafka cluster through Confluent; you are only running an ETC full node that syncs chain data.

@xingyushu
Author

> The default is http://localhost:8082/topics/etc; Confluent's default port is 8082.
> If you want to use the default URL, just don't pass the Kafka flag at all, e.g. geth --datadir ./chaindata/ --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3 --cache 100 --rpc --rpccorsdomain
> In that case, however, the full-node data is not synced to the Kafka cluster through Confluent; you are only running an ETC full node that syncs chain data.

My command is:
geth --datadir ./chaindata/ --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3 --cache 100 --rpc --rpccorsdomain --kafka.endpoint="http://localhost:9092/topics/etc"
Do I need to create a topic named etc first and then sync the data? The command runs and the blocks sync successfully, so how can I verify that my data has actually been sent to Kafka?

@Sallery-X
Collaborator

Creating the topic is not your job; Confluent will use the topic from your URL directly. To verify that the data reached Kafka, use the Kafka CLI to consume the topic and see whether any messages come through; Confluent also writes logs.
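
The verification described above can be sketched from the Docker host. The container name below is a guess (check `docker ps` for yours), and the REST Proxy check only applies if one is actually running:

```shell
# Consume the "etc" topic from the beginning to see whether block data
# has arrived. "kafka" is an assumed container name; adjust to docker ps.
docker exec -it kafka \
  kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic etc --from-beginning

# If a Confluent REST Proxy is in front of the broker, its GET /topics
# endpoint returns a JSON array of the topics it can see:
curl -s http://localhost:8082/topics
```

If messages scroll by in the consumer, the data is in Kafka; if the topic is empty or missing, nothing was produced to it.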
