This article walks through a complete real-time stream processing pipeline built on Flume, Kafka and Spark Streaming. The content is fairly detailed; readers who are interested can use it as a reference, and hopefully it will prove helpful.
The complete real-time stream processing pipeline based on Flume + Kafka + Spark Streaming
1. Environment: four test servers
Spark cluster: three nodes, spark1, spark2, spark3
Kafka cluster: three nodes, spark1, spark2, spark3
ZooKeeper cluster: three nodes, spark1, spark2, spark3
Log receiving server: spark1
Log collecting server: redis (this machine is normally used for Redis development; it is borrowed here to test log collection, so the hostname was left unchanged)
Log collection flow:
log collecting server -> log receiving server -> Kafka cluster -> Spark cluster for processing
Note: in a real production setup the log collecting server would most likely be an application server, while the log receiving server is one of the big-data machines. Logs travel over the network to the receiving server and from there enter the cluster for processing.
This is because, in production, the network is usually only opened one way, towards a specific port on a specific server.
Flume version: apache-flume-1.5.0-cdh6.4.9, which already has solid built-in support for Kafka.
2. Log collecting server (the sending side)
Configure Flume to collect a specific log file as it grows; collect.conf is as follows:
# Name the components on this agent
a1.sources = tailsource-1
a1.sinks = remotesink
a1.channels = memoryChnanel-1

# Describe/configure the source: tail the log file
a1.sources.tailsource-1.type = exec
a1.sources.tailsource-1.command = tail -F /opt/modules/tmpdata/logs/1.log
a1.sources.tailsource-1.channels = memoryChnanel-1

# Use a channel which buffers events in memory
a1.channels.memoryChnanel-1.type = memory
a1.channels.memoryChnanel-1.keep-alive = 10
a1.channels.memoryChnanel-1.capacity = 100000
a1.channels.memoryChnanel-1.transactionCapacity = 100000

# Describe the sink: forward events over Avro to the receiving server
a1.sinks.remotesink.type = avro
a1.sinks.remotesink.hostname = spark1
a1.sinks.remotesink.port = 666
a1.sinks.remotesink.channel = memoryChnanel-1
The log file is tailed in real time, and every new entry is sent over the network, as Avro, to port 666 on spark1.
Start the collecting agent:
bin/flume-ng agent --conf conf --conf-file conf/collect.conf --name a1 -Dflume.root.logger=INFO,console
3. Log receiving server
Configure Flume to receive the logs in real time; receive.conf is as follows:
# agent section
producer.sources = s
producer.channels = c
producer.sinks = r

# source section: listen for Avro events from the collecting server
producer.sources.s.type = avro
producer.sources.s.bind = spark1
producer.sources.s.port = 666
producer.sources.s.channels = c

# Each sink's type must be defined: write the events into a Kafka topic
producer.sinks.r.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.r.topic = mytopic
producer.sinks.r.brokerList = spark1:9092,spark2:9092,spark3:9092
producer.sinks.r.requiredAcks = 1
producer.sinks.r.batchSize = 20
# Specify the channel the sink should use
producer.sinks.r.channel = c

# Each channel's type is defined: a Kafka-backed channel
producer.channels.c.type = org.apache.flume.channel.kafka.KafkaChannel
producer.channels.c.capacity = 10000
producer.channels.c.transactionCapacity = 1000
producer.channels.c.brokerList = spark1:9092,spark2:9092,spark3:9092
producer.channels.c.topic = channel1
producer.channels.c.zookeeperConnect = spark1:2181,spark2:2181,spark3:2181
The key points are that the source accepts data arriving on network port 666 and the sink feeds it into the Kafka cluster, so the topic and the broker/ZooKeeper addresses must be configured correctly.
Start the receiving agent:
bin/flume-ng agent --conf conf --conf-file conf/receive.conf --name producer -Dflume.root.logger=INFO,console
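Before moving on to Spark, it can be useful to check that events really reach the Kafka topic. A minimal sketch, assuming the standard Kafka console scripts are available on a broker node: run the command below from the Kafka installation directory. The --zookeeper flag applies to the 0.8/0.9-era Kafka typically paired with this Flume release; newer Kafka versions use --bootstrap-server spark1:9092 instead.
# Print every message written to 'mytopic', from the beginning of the topic
bin/kafka-console-consumer.sh --zookeeper spark1:2181,spark2:2181,spark3:2181 --topic mytopic --from-beginning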
4. Processing the received data on the Spark cluster
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext
import kafka.serializer.StringDecoder
import scala.collection.immutable.HashMap
import org.apache.log4j.Level
import org.apache.log4j.Logger

/**
 * @author Administrator
 */
object KafkaDataTest {
  def main(args: Array[String]): Unit = {

    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.ERROR)

    val conf = new SparkConf().setAppName("stocker").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(1))

    // Kafka configurations
    val topics = Set("mytopic")
    val brokers = "spark1:9092,spark2:9092,spark3:9092"
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> brokers,
      "serializer.class" -> "kafka.serializer.StringEncoder")

    // Create a direct stream
    val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

    // Split each message into words and pair every word with a count of 1
    val urlClickLogPairsDStream = kafkaStream.flatMap(_._2.split(" ")).map((_, 1))

    // Every 5 seconds, sum the counts per word over the last 60 seconds
    val urlClickCountDaysDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => {
        v1 + v2
      }, Seconds(60), Seconds(5))

    urlClickCountDaysDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
After receiving the data from the Kafka cluster, Spark Streaming recomputes, every 5 seconds, the word counts over the most recent 60 seconds.
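Note that the listing above uses setMaster("local[2]"), which runs the job in local mode for testing. To run it on the Spark cluster instead, you would typically drop the setMaster call, package the job, and submit it with spark-submit. A minimal sketch; the jar name and the standalone master URL are placeholders, not taken from the article:
# Submit the packaged streaming job to the standalone cluster (jar name and master URL are illustrative)
bin/spark-submit --class KafkaDataTest --master spark://spark1:7077 kafka-data-test.jar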
5. Test results
Append lines to the log file three times in succession.
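The exact content appended is not critical; a hypothetical sequence of appends that is consistent with the window counts printed below would be:
# Three illustrative appends; each line flows through Flume -> Kafka -> Spark Streaming
echo "spark hadoop spark hadoop hive storm" >> /opt/modules/tmpdata/logs/1.log
echo "spark hadoop" >> /opt/modules/tmpdata/logs/1.log
echo "spark hadoop spark hadoop hive storm" >> /opt/modules/tmpdata/logs/1.log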
The Spark Streaming output was as follows:
(hive,1)
(spark,2)
(hadoop,2)
(storm,1)
---------------------------------------
(hive,1)
(spark,3)
(hadoop,3)
(storm,1)
---------------------------------------
(hive,2)
(spark,5)
(hadoop,5)
(storm,2)
This matches the expected result and nicely demonstrates Spark Streaming's sliding-window behaviour.
That is all for the real-time stream processing pipeline based on Flume, Kafka and Spark Streaming. Hopefully the content above is of some help and lets you learn something new. If you found the article useful, feel free to share it so that more people can see it.