SpringBoot Tutorial (32) | Integrating SkyWalking Distributed Tracing with Spring Boot
- 1. What is SkyWalking?
- 2. SkyWalking and JDK version compatibility
- 3. Downloading SkyWalking
- 4. SkyWalking data storage
- 5. Starting SkyWalking
- 6. Deploying the agent
  - Prerequisite: put Agents 8.9.0 into the project
  - Option 1: attach the agent in IDEA
  - Option 2: start from the java command line
  - Option 3: start via a shell script (Linux)
- 7. Starting the Spring Boot application
  - Starting with the agent attached in IDEA
- Configuring SkyWalking logging
- Viewing request and response payloads
  - Option 1: via agent configuration (has drawbacks)
  - Option 2: via the trace toolkit and a Filter
  - Option 3: via the trace toolkit and AOP
1. What is SkyWalking?
SkyWalking is an open-source platform for observing distributed systems, especially microservice, cloud-native, and containerized applications.
It provides tracing, monitoring, and diagnostic capabilities for distributed systems.
2. SkyWalking and JDK version compatibility
SkyWalking 8.x requires at least Java 8 (JDK 1.8),
while SkyWalking 9.x requires at least Java 11 (JDK 11),
so check your JDK version before choosing a release.
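Before picking a release, it helps to confirm which JDK the application actually runs on. The following is a small illustrative sketch (the class and method names are my own, not a SkyWalking API): it reads the JVM's specification version and maps it to the compatible SkyWalking lines per the rule above.

```java
// Illustrative helper: which SkyWalking release lines a given JDK can host.
// SkyWalking 8.x needs JDK 8+; SkyWalking 9.x needs JDK 11+.
public class JdkCheck {
    /** Maps a JDK feature version (8, 11, 17, ...) to the usable SkyWalking lines. */
    public static String usableLines(int jdkFeature) {
        if (jdkFeature >= 11) return "SkyWalking 8.x or 9.x";
        if (jdkFeature >= 8)  return "SkyWalking 8.x";
        return "none (JDK 8+ required)";
    }

    public static void main(String[] args) {
        // "1.8" on JDK 8, "11"/"17"/... on later JDKs
        String spec = System.getProperty("java.specification.version");
        int feature = spec.startsWith("1.") ? Integer.parseInt(spec.substring(2))
                                            : Integer.parseInt(spec);
        System.out.println("JDK " + feature + " -> " + usableLines(feature));
    }
}
```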
3. Downloading SkyWalking
Official download page: https://skywalking.apache.org/downloads/

- APM archives for other versions:
https://archive.apache.org/dist/skywalking/
- Java agent archives for other versions:
https://archive.apache.org/dist/skywalking/java-agent/
Note:
In 7.x and earlier, the APM package bundles the agents, but starting with 8.x they are distributed separately, so for 8.x and later you must also download the Agents package.
This article uses APM 8.9.1 and Agents 8.9.0, both unpacked after download.

4. SkyWalking data storage
SkyWalking supports several storage backends:
- H2 (the default; data is lost on restart)
- Elasticsearch (the most commonly used)
- MySQL
- TiDB
- …
The storage settings live in the OAP configuration file (config/application.yml).
Only the part that configures the storage backend is excerpted below:
```yaml
storage:
  selector: ${SW_STORAGE:h2}
  elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:500}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:""}
    password: ${SW_ES_PASSWORD:""}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
    dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
    # Super data set has been defined in the codes, such as trace segments. The following 3 config would be improve es performance when storage super size data in es.
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
    indexTemplateOrder: ${SW_STORAGE_ES_INDEX_TEMPLATE_ORDER:0} # the order of index template
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
    # flush the bulk every 10 seconds whatever the number of requests
    # INT(flushInterval * 2/3) would be used for index refresh period.
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{"analyzer":{"oap_analyzer":{"type":"stop"}}}"} # the oap analyzer.
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{"analyzer":{"oap_log_analyzer":{"type":"standard"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
  h2:
    driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
    url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db;DB_CLOSE_DELAY=-1}
    user: ${SW_STORAGE_H2_USER:sa}
    metadataQueryMaxSize: ${SW_STORAGE_H2_QUERY_MAX_SIZE:5000}
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:100}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:1}
  mysql:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:3306/swtest?rewriteBatchedStatements=true"}
      dataSource.user: ${SW_DATA_SOURCE_USER:root}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:root@1234}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
      dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
  tidb:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:4000/tidbswtest?rewriteBatchedStatements=true"}
      dataSource.user: ${SW_DATA_SOURCE_USER:root}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:""}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
      dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
      dataSource.useAffectedRows: ${SW_DATA_SOURCE_USE_AFFECTED_ROWS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
  influxdb:
    # InfluxDB configuration
    url: ${SW_STORAGE_INFLUXDB_URL:http://localhost:8086}
    user: ${SW_STORAGE_INFLUXDB_USER:root}
    password: ${SW_STORAGE_INFLUXDB_PASSWORD:}
    database: ${SW_STORAGE_INFLUXDB_DATABASE:skywalking}
    actions: ${SW_STORAGE_INFLUXDB_ACTIONS:1000} # the number of actions to collect
    duration: ${SW_STORAGE_INFLUXDB_DURATION:1000} # the time to wait at most (milliseconds)
    batchEnabled: ${SW_STORAGE_INFLUXDB_BATCH_ENABLED:true}
    fetchTaskLogMaxSize: ${SW_STORAGE_INFLUXDB_FETCH_TASK_LOG_MAX_SIZE:5000} # the max number of fetch task log in a request
    connectionResponseFormat: ${SW_STORAGE_INFLUXDB_CONNECTION_RESPONSE_FORMAT:MSGPACK} # the response format of connection to influxDB, cannot be anything but MSGPACK or JSON.
  postgresql:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:postgresql://localhost:5432/skywalking"}
      dataSource.user: ${SW_DATA_SOURCE_USER:postgres}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:123456}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
      dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
  zipkin-elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
    # Super data set has been defined in the codes, such as trace segments. The following 3 config would be improve es performance when storage super size data in es.
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
    user: ${SW_ES_USER:""}
    password: ${SW_ES_PASSWORD:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
    # flush the bulk every 10 seconds whatever the number of requests
    # INT(flushInterval * 2/3) would be used for index refresh period.
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{"analyzer":{"oap_analyzer":{"type":"stop"}}}"} # the oap analyzer.
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{"analyzer":{"oap_log_analyzer":{"type":"standard"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
  iotdb:
    host: ${SW_STORAGE_IOTDB_HOST:127.0.0.1}
    rpcPort: ${SW_STORAGE_IOTDB_RPC_PORT:6667}
    username: ${SW_STORAGE_IOTDB_USERNAME:root}
    password: ${SW_STORAGE_IOTDB_PASSWORD:root}
    storageGroup: ${SW_STORAGE_IOTDB_STORAGE_GROUP:root.skywalking}
    sessionPoolSize: ${SW_STORAGE_IOTDB_SESSIONPOOL_SIZE:16}
    fetchTaskLogMaxSize: ${SW_STORAGE_IOTDB_FETCH_TASK_LOG_MAX_SIZE:1000} # the max number of fetch task log in a request
```
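Every value in this file uses SkyWalking's `${ENV_NAME:default}` placeholder form: the OAP uses the environment variable if it is set, otherwise the default after the first colon. The resolution happens inside the OAP itself; the following is only a self-contained sketch of that lookup rule (class and method names are my own):

```java
import java.util.Map;
import java.util.function.Function;

// Mimics the ${ENV_NAME:default} resolution used throughout application.yml.
public class PlaceholderDemo {
    /** Resolves "${NAME:default}" against the given environment lookup. */
    public static String resolve(String placeholder, Function<String, String> env) {
        if (!placeholder.startsWith("${") || !placeholder.endsWith("}")) {
            return placeholder;                        // not a placeholder, use as-is
        }
        String inner = placeholder.substring(2, placeholder.length() - 1);
        int colon = inner.indexOf(':');                // split at the FIRST colon only,
        String name = colon < 0 ? inner : inner.substring(0, colon);
        String fallback = colon < 0 ? "" : inner.substring(colon + 1);
        String value = env.apply(name);                // so "localhost:9200" survives as a default
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("SW_STORAGE", "elasticsearch");
        System.out.println(resolve("${SW_STORAGE:h2}", env::get));                      // env var set
        System.out.println(resolve("${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}", env::get)); // default used
    }
}
```

This is why `SW_STORAGE=elasticsearch` in the environment switches the backend without editing the file.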
5. Starting SkyWalking
Go to D:\apache-skywalking-apm-8.9.1\apache-skywalking-apm-bin\bin and double-click startup.bat (run it as administrator); two console windows will open.
- (1) SkyWalking-Collector: the trace collector, which receives client data over gRPC and HTTP. The HTTP port defaults to 12800 and the gRPC port to 11800. (To change them, edit apache-skywalking-apm-bin\config\application.yml.)
- (2) SkyWalking-Webapp: the management UI, on port 8080 by default. (To change it, edit apache-skywalking-apm-bin\webapp\webapp.yml.)
The startup windows look like this:

Then open SkyWalking in the browser: http://localhost:8080/
There is an auto-refresh toggle on the right side of the page; be sure to turn it on.
Otherwise, after the Spring Boot project starts you may think it never connected (refreshing the page with F5 will not help).

6. Deploying the agent
Prerequisite: put Agents 8.9.0 into the project
Other locations work too, but keeping the agent inside the project is a bit more convenient, as you will see later.

Option 1: attach the agent in IDEA
Edit the VM options of the startup class's run configuration.


The JVM arguments are as follows:

```
-javaagent:D:\ideaObject\reactBoot\springboot-full\src\main\skywalking-agent\skywalking-agent.jar
-Dskywalking.agent.service_name=woqu-ndy
-Dskywalking.collector.backend_service=127.0.0.1:11800
```

- -javaagent: the local disk path of skywalking-agent.jar
(here it is placed inside the project)
- -Dskywalking.agent.service_name: the service name displayed in SkyWalking
- -Dskywalking.collector.backend_service: the IP and port of the SkyWalking collector service
- Note: -Dskywalking.collector.backend_service may point to a remote address, but -javaagent must reference a skywalking-agent.jar on your local machine.
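The -D flags above are ordinary JVM system properties; skywalking-agent.jar reads them (among other configuration sources) when it bootstraps. This sketch is not agent code; it only demonstrates the property-with-default lookup pattern those flags rely on (the fallback name is made up):

```java
// Demonstrates how a -Dskywalking.agent.service_name flag becomes visible
// inside the JVM as a system property.
public class AgentPropsDemo {
    static String serviceName() {
        // Falls back to a placeholder when the property was not passed
        return System.getProperty("skywalking.agent.service_name", "unnamed-service");
    }

    public static void main(String[] args) {
        System.out.println(serviceName());                        // default when not set
        System.setProperty("skywalking.agent.service_name", "woqu-ndy");
        System.out.println(serviceName());                        // value from the -D flag
    }
}
```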
Option 2: start from the java command line

```
java -javaagent:D:\ideaObject\reactBoot\springboot-full\src\main\skywalking-agent\skywalking-agent.jar=-Dskywalking.agent.service_name=service-myapp,-Dskywalking.collector.backend_service=localhost:11800 -jar service-myapp.jar
```
Option 3: start via a shell script (Linux)

```shell
#!/bin/bash

# Path to the SkyWalking agent
AGENT_PATH="/home/yourusername/Desktop/apache-skywalking-apm-6.6.0/apache-skywalking-apm-bin/agent"

# Path to the application JAR
JAR_PATH="/path/to/your/service-myapp.jar"

# SkyWalking service name and collector backend address
SERVICE_NAME="service-myapp"
COLLECTOR_BACKEND_SERVICE="localhost:11800"

# Build the Java agent arguments
JAVA_AGENT="-javaagent:$AGENT_PATH/skywalking-agent.jar
 -Dskywalking.agent.service_name=$SERVICE_NAME
 -Dskywalking.collector.backend_service=$COLLECTOR_BACKEND_SERVICE"

# Start the application
java $JAVA_AGENT -jar $JAR_PATH
```
7. Starting the Spring Boot application
Starting with the agent attached in IDEA
After startup, if the beginning of the console log contains records like the following, the application has connected to SkyWalking:

Then look at the SkyWalking page (http://localhost:8080/); a panel like this indicates the connection is established:

Now call a Controller endpoint and you will see the request captured.
(At this point, however, there are still no detailed logs of the request or response payloads.)


Configuring SkyWalking logging
Add the SkyWalking traceId (TID) to application logs to make troubleshooting easier.
First, add the Maven dependency:

```xml
<!-- SkyWalking logback toolkit -->
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-logback-1.x</artifactId>
    <version>9.0.0</version>
</dependency>
```
Then create a logback-spring.xml file under the resources folder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">

    <!-- Log file location; avoid relative paths in Logback configuration -->
    <property name="LOG_HOME" value="D:/logs/" ></property>

    <!-- Colored logs -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />

    <!-- Console appender -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout">
                <!-- %d date, %thread thread name, %-5level level padded to 5 chars, %msg message, %n newline -->
                <pattern>%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} [%X{tid}] %clr([%-10.10thread]){faint} %clr(%-5level) %clr(%-50.50logger{50}:%-3L){cyan} %clr(-){faint} %msg%n</pattern>
            </layout>
        </encoder>
    </appender>

    <!-- File appender, one file per day (only messages logged via Logger/LoggerFactory) -->
    <!-- The file pattern below must drop the color codes, to keep ANSI escape sequences out of the files -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Output file name pattern -->
            <FileNamePattern>${LOG_HOME}/%d{yyyy-MM-dd}/pro.log</FileNamePattern>
            <!-- Days of history to keep -->
            <MaxHistory>30</MaxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout">
                <!-- %d date, %thread thread name, %-5level level padded to 5 chars, %msg message, %n newline -->
                <!-- <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> -->
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%-10.10thread] %-5level %-50.50logger{50}:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
        <!-- Maximum size of a single log file -->
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <!-- SkyWalking gRPC log reporter -->
    <appender name="grpc" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.mdc.TraceIdMDCPatternLogbackLayout">
                <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{tid}] [%thread] %-5level %logger{36} -%msg%n</Pattern>
            </layout>
        </encoder>
    </appender>

    <!-- Root log level -->
    <root level="INFO">
        <appender-ref ref="STDOUT" ></appender-ref>
        <appender-ref ref="FILE" ></appender-ref>
        <appender-ref ref="grpc"/>
    </root>
</configuration>
```
Call an endpoint and the TID appears in the log output.
(Here it is 882c67dc859046c398fbfc5725df9de0.109.17288962842340001.)

Paste it into the trace ID field on the Tracing page to look up the trace:

Paste it into the trace ID field on the Log page to look up the log entries:

Viewing request and response payloads
Option 1: via agent configuration (has drawbacks)
First, check the SkyWalking agent configuration.
The agent reads a configuration file at startup, normally agent.config.
Collection of request parameters is disabled by default, so you need to enable it manually.
The steps are as follows:
In the agent.config file, find the plugin section and make sure the following entries are set to true:

```
plugin.tomcat.collect_http_params=${SW_PLUGIN_TOMCAT_COLLECT_HTTP_PARAMS:true}
plugin.springmvc.collect_http_params=${SW_PLUGIN_SPRINGMVC_COLLECT_HTTP_PARAMS:true}
plugin.httpclient.collect_http_params=${SW_PLUGIN_HTTPCLIENT_COLLECT_HTTP_PARAMS:true}
```

Drawback: these settings only capture the parameters of GET requests; POST bodies are not collected, so this approach is of limited use.
Option 2: via the trace toolkit and a Filter
1. Add the trace toolkit dependency:

```xml
<!-- SkyWalking trace toolkit -->
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-trace</artifactId>
    <version>9.0.0</version>
</dependency>
```

2. Use HttpFilter with ContentCachingRequestWrapper
Tip: why not use HttpServletRequest directly?
If you read the InputStream of an HttpServletRequest and log its content, later business logic can no longer read it, because the stream can only be consumed once.
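Before the actual filter, here is a tiny self-contained sketch of that read-once problem and the caching idea behind ContentCachingRequestWrapper: drain the one-shot stream into a byte array once, then hand out fresh streams over the cached bytes as often as needed. This is a simplified illustration with made-up names, not Spring's implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Caches a one-shot body stream so it can be read repeatedly --
// the same idea ContentCachingRequestWrapper applies to servlet requests.
public class CachingBodyDemo {
    private final byte[] cached;

    public CachingBodyDemo(InputStream body) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        try {
            int n;
            while ((n = body.read(chunk)) != -1) {
                buf.write(chunk, 0, n);   // drain the one-shot stream exactly once
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        this.cached = buf.toByteArray();
    }

    /** Each call returns a fresh stream over the cached bytes. */
    public InputStream getInputStream() {
        return new ByteArrayInputStream(cached);
    }

    public String asString() {
        return new String(cached, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        CachingBodyDemo demo = new CachingBodyDemo(
                new ByteArrayInputStream("{ \"name\": \"hello\" }".getBytes(StandardCharsets.UTF_8)));
        // The body can now be logged AND re-read, unlike the raw servlet stream
        System.out.println(demo.asString());
        System.out.println(demo.asString());
    }
}
```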
```java
package com.example.springbootfull.quartztest.Filter;

import lombok.extern.slf4j.Slf4j;
import org.apache.skywalking.apm.toolkit.trace.ActiveSpan;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;
import org.springframework.web.util.ContentCachingRequestWrapper;
import org.springframework.web.util.ContentCachingResponseWrapper;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

@Slf4j
@Component
public class ApmHttpInfo extends HttpFilter {
    // Headers excluded from the captured request description
    private static final Set<String> IGNORED_HEADERS;
    static {
        Set<String> ignoredHeaders = new HashSet<>();
        ignoredHeaders.addAll(
            java.util.Arrays.asList(
                "Content-Type",
                "User-Agent",
                "Accept",
                "Cache-Control",
                "Postman-Token",
                "Host",
                "Accept-Encoding",
                "Connection",
                "Content-Length"
            ).stream()
            .map(String::toUpperCase)
            .collect(Collectors.toList())
        );
        IGNORED_HEADERS = ignoredHeaders;
    }

    @Override
    public void doFilter(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws IOException, ServletException {
        ContentCachingRequestWrapper requestWrapper = new ContentCachingRequestWrapper(request);
        ContentCachingResponseWrapper responseWrapper = new ContentCachingResponseWrapper(response);

        try {
            filterChain.doFilter(requestWrapper, responseWrapper);
        } finally {
            try {
                // Build a curl-style request description, e.g.
                // curl -X GET http://localhost:18080/getPerson?id=1 -H 'token: me-token' -d '{ "name": "hello" }'
                // Method & URL & query string
                StringBuilder sb = new StringBuilder("curl")
                        .append(" -X ").append(request.getMethod())
                        .append(" ").append(request.getRequestURL().toString());
                if (StringUtils.hasLength(request.getQueryString())) {
                    sb.append("?").append(request.getQueryString());
                }

                // Headers
                Enumeration<String> headerNames = request.getHeaderNames();
                while (headerNames.hasMoreElements()) {
                    String headerName = headerNames.nextElement();
                    if (!IGNORED_HEADERS.contains(headerName.toUpperCase())) {
                        sb.append(" -H '").append(headerName).append(": ").append(request.getHeader(headerName)).append("'");
                    }
                }

                // Request body
                String body = new String(requestWrapper.getContentAsByteArray(), StandardCharsets.UTF_8);
                if (StringUtils.hasLength(body)) {
                    sb.append(" -d '").append(body).append("'");
                }
                // Tag the active span with the request
                ActiveSpan.tag("input", sb.toString());

                // Response body
                String responseBody = new String(responseWrapper.getContentAsByteArray(), StandardCharsets.UTF_8);
                // Tag the active span with the response
                ActiveSpan.tag("output", responseBody);
            } catch (Exception e) {
                log.warn("fail to build http log", e);
            } finally {
                // Required: copies the cached body back to the real response; without it the client never receives a body
                responseWrapper.copyBodyToResponse();
            }
        }
    }
}
```
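The core of the filter is the curl-style string it attaches to the span. To make the resulting format easy to see and verify in isolation, here is that string-building step extracted into a standalone, self-contained sketch (the class and method names are my own, not part of the filter):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Pure version of the filter's span-tag string builder:
// renders a request as a reproducible curl command.
public class CurlStringBuilder {
    public static String build(String method, String url, String query,
                               Map<String, String> headers, String body) {
        StringBuilder sb = new StringBuilder("curl -X ").append(method).append(" ").append(url);
        if (query != null && !query.isEmpty()) {
            sb.append("?").append(query);                 // query string, if any
        }
        for (Map.Entry<String, String> h : headers.entrySet()) {
            sb.append(" -H '").append(h.getKey()).append(": ").append(h.getValue()).append("'");
        }
        if (body != null && !body.isEmpty()) {
            sb.append(" -d '").append(body).append("'");  // request body, if any
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("token", "me-token");
        System.out.println(build("GET", "http://localhost:18080/getPerson", "id=1",
                headers, "{ \"name\": \"hello\" }"));
    }
}
```

An operator can paste the "input" tag straight into a terminal to replay the request, which is the point of the curl format.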
Result for a GET request:

Result for a POST request:

Option 3: via the trace toolkit and AOP
This is also a workable approach, but it is not covered in detail here.