
Collecting Application Logs from a k8s Container Environment into an Existing ELK Logging Platform

Tags: container log collection in a k8s environment

How to Collect Application Logs in a K8S Environment
===
This article focuses on how to collect container logs in a K8S container environment.

創(chuàng)新互聯(lián)建站長期為上千多家客戶提供的網(wǎng)站建設服務,團隊從業(yè)經(jīng)驗10年,關注不同地域、不同群體,并針對不同對象提供差異化的產(chǎn)品和服務;打造開放共贏平臺,與合作伙伴共同營造健康的互聯(lián)網(wǎng)生態(tài)環(huán)境。為子洲企業(yè)提供專業(yè)的成都做網(wǎng)站、網(wǎng)站設計,子洲網(wǎng)站改版等技術服務。擁有10多年豐富建站經(jīng)驗和眾多成功案例,為您定制開發(fā)。

1. Choosing a container log collection solution:

In a K8S cluster there are generally three options for collecting container logs. The first is to install a log collection agent, such as fluentd, on every k8s node. The drawback of this approach is that the application must write its logs to standard output, and the agent then tails the log files under /var/log/containers on each compute node. Those files have names like user-center-765885677f-j68zt_default_user-center-0867b9c2f8ede64cebeb359dd08a6b05f690d50427aa89f7498597db8944cccc.log; with that many random strings in the name, it is hard to map a file back to the application running inside the container. I have also seen others report that multi-line Java stack traces are not merged in these logs, although I have not tested this approach myself. A minimal sketch of such a node-level collector input is shown below.
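
For reference only, a minimal sketch of what the node-level collector input could look like, using the fluentd tail plugin (the pos_file location and tag are assumptions, not from the original article):

<source>
  @type tail
  # tail every container log file the container runtime writes on this node
  path /var/log/containers/*.log
  # remember read positions across fluentd restarts (assumed location)
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    # docker writes one JSON object per line: {"log": "...", "stream": "...", "time": "..."}
    @type json
  </parse>
</source>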

The second option is to run a sidecar container inside the application's pod; the sidecar mounts the same log volume as the application container and can be, for example, filebeat or fluentd. The downside is the overhead of the sidecar itself: every pod has to run an extra log collection container, which consumes resources. See the pod sketch right after this paragraph.
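
A minimal sketch of the sidecar pattern, assuming an emptyDir volume shared between the application and a filebeat sidecar (the image tags, mount paths, and ConfigMap name are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest              # hypothetical application image
    volumeMounts:
    - name: applogs
      mountPath: /apptestlogs         # same directory the logback config below writes to
  - name: filebeat-sidecar
    image: docker.elastic.co/beats/filebeat:7.17.0
    volumeMounts:
    - name: applogs
      mountPath: /apptestlogs
      readOnly: true
    - name: filebeat-config
      mountPath: /usr/share/filebeat/filebeat.yml
      subPath: filebeat.yml
  volumes:
  - name: applogs
    emptyDir: {}                      # shared log volume, discarded when the pod is deleted
  - name: filebeat-config
    configMap:
      name: filebeat-sidecar-config   # hypothetical ConfigMap holding filebeat.yml
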
The third option is to send the application's logs directly to kafka; logstash then consumes them from kafka, processes them into JSON, and ships them to the es cluster, where they are finally displayed in kibana. This is the option I experimented with: by modifying the logback configuration file, the application sends its logs straight into the kafka buffer. A sketch of the logstash side follows, and the logback configuration itself comes right after.
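
For context, a minimal sketch of the logstash side of this pipeline; the topic and broker list are taken from the logback configuration below, while the consumer group, grok pattern, and index name are assumptions:

input {
  kafka {
    bootstrap_servers => "192.168.1.12:9092,192.168.1.14:9092,192.168.1.15:9092"
    topics => ["elk-stand-sit-fkp-eureka"]
    group_id => "logstash-elk"                # hypothetical consumer group
  }
}
filter {
  # parse the logback pattern used below into structured fields (sketch only)
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] %{GREEDYDATA:rest}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.16:9200"]     # hypothetical es endpoint
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}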

2. Logback configuration:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <jmxConfigurator/>  <!-- allow dynamic reloading of the configuration -->

    <property name="log-path" value="/apptestlogs" />  <!-- unified log root; /apptestlogs here for testing -->
    <property name="app-name" value="test" />  <!-- application name -->
    <property name="filename" value="test-test" />  <!-- log file name; defaults to the component name -->
    <property name="dev-group-name" value="test" /> <!-- development team name -->

    <conversionRule conversionWord="traceId"  converterClass="org.lsqt.components.log.logback.TraceIdConvert"/>

    <!-- adjust the variables above to your environment -->
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <!-- a typical log pattern -->
        <!--<encoder>-->
          <!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
        <!--</encoder>-->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
    </appender>

    <appender name="fileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log-path}/${app-name}/${filename}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log-path}/${app-name}/${filename}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <!-- caps the size of each log file; once a file reaches 300MB it rolls over to the next %i index,
                 and files older than maxHistory days are deleted -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>300MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <!--<encoder>-->
            <!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
        <!--</encoder>-->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
    </appender>
    <appender name="errorAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log-path}/${app-name}/${filename}-error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log-path}/${app-name}/${filename}-error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>300MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>15</maxHistory>
        </rollingPolicy>
        <!--<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">-->
            <!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
        <!--</encoder>-->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- This example configuration is probably the most unreliable under
    failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
           <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
        </encoder>
        <topic>elk-stand-sit-fkp-eureka</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=192.168.1.12:9092,192.168.1.14:9092,192.168.1.15:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
        <!-- all log messages that cannot be delivered fast enough will immediately go to the fallback appenders -->
        <producerConfig>block.on.buffer.full=false</producerConfig>

         <!-- this is the fallback appender if kafka is not available. -->
        <appender-ref ref="consoleAppender" />
    </appender>

    <root level="debug">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" /> 
        <appender-ref ref="fileAppender"/>
        <appender-ref ref="consoleAppender"/>
        <appender-ref ref="errorAppender"/>

    </root>
</configuration>
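
The KafkaAppender used above is provided by the logback-kafka-appender library, and the trace-id layout comes from the SkyWalking logback toolkit, so both must be on the application classpath. A minimal sketch of the Maven dependencies (the versions shown are assumptions; check each project for its latest release):

<!-- https://github.com/danielwegener/logback-kafka-appender -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version> <!-- assumed version -->
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version> <!-- assumed version -->
</dependency>
<!-- provides org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout -->
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-logback-1.x</artifactId>
    <version>8.7.0</version> <!-- assumed version -->
</dependency>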

### 3. Notes on the logback configuration ###

  1. Logback can send logs to kafka in one of two modes: asynchronous or synchronous. In asynchronous mode the application threads are never blocked: if kafka becomes unreachable because of a network failure, messages that cannot be delivered are handed off to the fallback appenders instead, for example written to a log file. In synchronous mode, if kafka becomes unreachable the logging thread blocks, which in turn hurts application performance. I have not tested the synchronous mode myself; its configuration looks like this:
    <!-- This example configuration is more restrictive and will try to ensure that every message
     is eventually delivered in an ordered fashion (as long the logging application stays alive) -->
    <appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>

        <topic>important-logs</topic>
        <!-- ensure that every message sent by the executing host is partitioned to the same partition strategy -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
        <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
            <!-- wait indefinitely until the kafka producer was able to send the message -->
            <timeout>0</timeout>
        </deliveryStrategy>

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
        <producerConfig>buffer.memory=8388608</producerConfig>

        <!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
        <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy  -->
        <producerConfig>compression.type=gzip</producerConfig>

        <!-- Log every message that could not be sent to kafka to STDERR;
             note that an appender named "STDERR" must be defined elsewhere in the configuration -->
        <appender-ref ref="STDERR"/>
    </appender>

With logback configured to send its output directly to kafka in asynchronous mode, the container logs showed up successfully in kibana.

For more detail on these posts, follow my personal WeChat public account "云時代IT運維" (IT operations in the cloud era). It shares new technologies and trends in internet operations, with a focus on devops, jenkins, zabbix monitoring, kubernetes, ELK, middleware such as redis and MQ, and operations scripting in shell and python. I have worked in IT operations for over ten years, full-time on Linux/Unix system administration since 2008, and nearly all posts on the account are original write-ups of my own hands-on experience. I am glad to share what I have learned and to grow together with fellow practitioners.
