[Research] Logs stopped arriving after upgrading to Splunk 8.1.1: the fix (index disk full)
2020-12-30
After upgrading to Splunk 8.1.1, logs stopped coming in.
df showed the disk at 99% usage.
References:
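Before touching retention settings, it helps to confirm that the Splunk indexes are actually what is filling the disk. A sketch, assuming the default install path /opt/splunk and the default index location under var/lib/splunk (adjust paths for your host):

```shell
# How full is the filesystem holding the Splunk install?
df -h /opt

# Which index databases are the largest? (sorted, top 10)
du -sh /opt/splunk/var/lib/splunk/* | sort -rh | head
```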
https://docs.splunk.com/Documentation/Splunk/8.1.1/Indexer/Setaretirementandarchivingpolicy
https://docs.splunk.com/Documentation/Splunk/9.0.1/Admin/Indexesconf
The old config file /opt/splunk/etc/system/local/indexes.conf was gone; the upgrade probably wiped the directory and reinstalled it. Copy /opt/splunk/etc/system/default/indexes.conf into local/, make it writable, then change the retention from the default of 6 years back to 180 days (disk space is limited):
cp /opt/splunk/etc/system/default/indexes.conf /opt/splunk/etc/system/local/indexes.conf
chmod a+w /opt/splunk/etc/system/local/indexes.conf
vi /opt/splunk/etc/system/local/indexes.conf
[main]
frozenTimePeriodInSecs = 15552000
Default value: 188697600 s / 60 / 60 / 24 / 365 ≈ 6 years
New value: 15552000 s / 60 / 60 / 24 = 180 days
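The conversions above can be double-checked with shell arithmetic:

```shell
# frozenTimePeriodInSecs is expressed in seconds.
# Target retention: 180 days in seconds
echo $((180 * 24 * 60 * 60))          # prints 15552000

# Shipped default, converted back to days (2184 days ~= 6 years)
echo $((188697600 / 60 / 60 / 24))    # prints 2184
```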
After rebooting the Linux OS, logs started coming in again, and df no longer showed 99%.
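A full OS reboot works, but restarting just splunkd is normally enough for indexes.conf changes to take effect. A sketch, assuming the default install path:

```shell
# Restart the Splunk daemon only (no OS reboot needed)
/opt/splunk/bin/splunk restart

# Verify the effective setting afterwards: btool merges default/ and local/
# configs, so this shows the value splunkd is actually using.
/opt/splunk/bin/splunk btool indexes list main | grep frozenTimePeriodInSecs
```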
********************************************************************************
Addendum: full contents of indexes.conf
[default]
sync = 0
memPoolMB = auto
defaultDatabase = main
enableRealtimeSearch = true
suppressBannerList =
maxRunningProcessGroups = 8
maxRunningProcessGroupsLowPriority = 1
bucketRebuildMemoryHint = auto
serviceOnlyAsNeeded = true
serviceSubtaskTimingPeriod = 30
serviceInactiveIndexesPeriod = 60
maxBucketSizeCacheEntries = 0
processTrackerServiceInterval = 1
hotBucketTimeRefreshInterval = 10
rtRouterThreads = 0
rtRouterQueueSize = 10000
selfStorageThreads = 2
fileSystemExecutorWorkers = 5
hotBucketStreaming.extraBucketBuildingCmdlineArgs =
maxDataSize = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 15552000
rotatePeriodInSecs = 60
coldToFrozenScript =
coldToFrozenDir =
compressRawdata = true
maxTotalDataSizeMB = 500000
maxGlobalRawDataSizeMB = 0
maxGlobalDataSizeMB = 0
maxConcurrentOptimizes = 6
maxHotSpanSecs = 7776000
maxHotIdleSecs = 0
maxHotBuckets = auto
metric.maxHotBuckets = auto
minHotIdleSecsBeforeForceRoll = auto
quarantinePastSecs = 77760000
quarantineFutureSecs = 2592000
rawChunkSizeBytes = 131072
minRawFileSyncSecs = disable
assureUTF8 = false
serviceMetaPeriod = 25
partialServiceMetaPeriod = 0
throttleCheckPeriod = 15
syncMeta = true
maxMetaEntries = 1000000
maxBloomBackfillBucketAge = 30d
enableOnlineBucketRepair = true
enableDataIntegrityControl = false
maxTimeUnreplicatedWithAcks = 60
maxTimeUnreplicatedNoAcks = 300
minStreamGroupQueueSize = 2000
warmToColdScript =
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
homePath.maxDataSizeMB = 0
coldPath.maxDataSizeMB = 0
streamingTargetTsidxSyncPeriodMsec = 5000
journalCompression = gzip
enableTsidxReduction = false
suspendHotRollByDeleteQuery = false
tsidxReductionCheckPeriodInSec = 600
timePeriodInSecBeforeTsidxReduction = 604800
datatype = event
splitByIndexKeys =
metric.splitByIndexKeys =
tsidxWritingLevel = 1
archiver.enableDataArchive = false
archiver.maxDataArchiveRetentionPeriod = 0
hotBucketStreaming.sendSlices = false
hotBucketStreaming.removeRemoteSlicesOnRoll = false
hotBucketStreaming.reportStatus = false
hotBucketStreaming.deleteHotsAfterRestart = false
tsidxTargetSizeMB = 1500
metric.tsidxTargetSizeMB = 1500
metric.enableFloatingPointCompression = true
metric.compressionBlockSize = 1024
metric.stubOutRawdataJournal = true
metric.timestampResolution = s
waitPeriodInSecsForManifestWrite = 60
repFactor = 0

[_audit]
homePath = $SPLUNK_DB/audit/db
coldPath = $SPLUNK_DB/audit/colddb
thawedPath = $SPLUNK_DB/audit/thaweddb
tstatsHomePath = volume:_splunk_summaries/audit/datamodel_summary

[_internal]
homePath = $SPLUNK_DB/_internaldb/db
coldPath = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
tstatsHomePath = volume:_splunk_summaries/_internaldb/datamodel_summary
maxDataSize = 1000
maxHotSpanSecs = 432000
frozenTimePeriodInSecs = 2592000

[_introspection]
homePath = $SPLUNK_DB/_introspection/db
coldPath = $SPLUNK_DB/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb
maxDataSize = 1024
frozenTimePeriodInSecs = 1209600

[_metrics]
homePath = $SPLUNK_DB/_metrics/db
coldPath = $SPLUNK_DB/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric
frozenTimePeriodInSecs = 1209600
metric.splitByIndexKeys = metric_name

[_metrics_rollup]
homePath = $SPLUNK_DB/_metrics_rollup/db
coldPath = $SPLUNK_DB/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric
frozenTimePeriodInSecs = 63072000
metric.splitByIndexKeys = metric_name

[_telemetry]
homePath = $SPLUNK_DB/_telemetry/db
coldPath = $SPLUNK_DB/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb
maxDataSize = 256
frozenTimePeriodInSecs = 63072000

[_thefishbucket]
homePath = $SPLUNK_DB/fishbucket/db
coldPath = $SPLUNK_DB/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb
tstatsHomePath = volume:_splunk_summaries/fishbucket/datamodel_summary
maxDataSize = 500
frozenTimePeriodInSecs = 2419200

[history]
homePath = $SPLUNK_DB/historydb/db
coldPath = $SPLUNK_DB/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb
tstatsHomePath = volume:_splunk_summaries/historydb/datamodel_summary
maxDataSize = 10
frozenTimePeriodInSecs = 604800

[main]
homePath = $SPLUNK_DB/defaultdb/db
coldPath = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume

[provider-family:hadoop]
vix.mode = report
vix.command = $SPLUNK_HOME/bin/jars/sudobash
vix.command.arg.1 = $HADOOP_HOME/bin/hadoop
vix.command.arg.2 = jar
vix.command.arg.3 = $SPLUNK_HOME/bin/jars/SplunkMR-h1.jar
vix.command.arg.4 = com.splunk.mr.SplunkMR
vix.env.MAPREDUCE_USER =
vix.env.HADOOP_HEAPSIZE = 512
vix.env.HADOOP_CLIENT_OPTS = -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.env.HUNK_THIRDPARTY_JARS = $SPLUNK_HOME/bin/jars/thirdparty/common/avro-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/avro-mapred-1.7.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-compress-1.19.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/commons-io-2.4.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/libfb303-0.9.2.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/parquet-hive-bundle-1.10.1.jar,$SPLUNK_HOME/bin/jars/thirdparty/common/snappy-java-1.1.1.7.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-exec-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-metastore-0.12.0.jar,$SPLUNK_HOME/bin/jars/thirdparty/hive/hive-serde-0.12.0.jar
vix.mapred.job.reuse.jvm.num.tasks = 100
vix.mapred.child.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapred.reduce.tasks = 0
vix.mapred.job.map.memory.mb = 2048
vix.mapred.job.reduce.memory.mb = 512
vix.mapred.job.queue.name = default
vix.mapreduce.job.jvm.numtasks = 100
vix.mapreduce.map.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapreduce.reduce.java.opts = -server -Xmx512m -XX:ParallelGCThreads=4 -XX:+UseParallelGC -XX:+DisplayVMOutputToStderr
vix.mapreduce.job.reduces = 0
vix.mapreduce.map.memory.mb = 2048
vix.mapreduce.reduce.memory.mb = 512
vix.mapreduce.job.queuename = default
vix.splunk.search.column.filter = 1
vix.splunk.search.mixedmode = 1
vix.splunk.search.debug = 0
vix.splunk.search.mr.maxsplits = 10000
vix.splunk.search.mr.minsplits = 100
vix.splunk.search.mr.splits.multiplier = 10
vix.splunk.search.mr.poll = 2000
vix.splunk.search.recordreader = SplunkJournalRecordReader,ValueAvroRecordReader,SimpleCSVRecordReader,SequenceFileRecordReader
vix.splunk.search.recordreader.avro.regex = \.avro$
vix.splunk.search.recordreader.csv.regex = \.([tc]sv)(?:\.(?:gz|bz2|snappy))?$
vix.splunk.search.recordreader.sequence.regex = \.seq$
vix.splunk.home.datanode = /tmp/splunk/$SPLUNK_SERVER_NAME/
vix.splunk.heartbeat = 1
vix.splunk.heartbeat.threshold = 60
vix.splunk.heartbeat.interval = 1000
vix.splunk.setup.onsearch = 1
vix.splunk.setup.package = current

[splunklogger]
homePath = $SPLUNK_DB/splunklogger/db
coldPath = $SPLUNK_DB/splunklogger/colddb
thawedPath = $SPLUNK_DB/splunklogger/thaweddb
disabled = true

[summary]
homePath = $SPLUNK_DB/summarydb/db
coldPath = $SPLUNK_DB/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
tstatsHomePath = volume:_splunk_summaries/summarydb/datamodel_summary

[volume:_splunk_summaries]
path = $SPLUNK_DB
Not sure what the values of $SPLUNK_HOME and $SPLUNK_DB actually are?
[root@aplog local]# echo $SPLUNK_DB

[root@aplog local]# echo $SPLUNK_HOME

[root@aplog local]#
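Both echo empty because neither variable is exported to the login shell; splunkd resolves them from etc/splunk-launch.conf at startup. A minimal self-contained demo of how those settings look in that file, using a throwaway copy under /tmp (on a real host you would grep /opt/splunk/etc/splunk-launch.conf instead):

```shell
# Create a throwaway copy mimicking the relevant lines of splunk-launch.conf
cat > /tmp/splunk-launch-demo.conf <<'EOF'
SPLUNK_HOME=/opt/splunk
# SPLUNK_DB=/opt/splunk/var/lib/splunk
EOF

# Show the SPLUNK_HOME / SPLUNK_DB settings, active or commented out
grep -E '^#? ?SPLUNK_(HOME|DB)' /tmp/splunk-launch-demo.conf
```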
cat /opt/splunk/etc/splunk-launch.conf.default
# Version 8.2.6
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
#
# SPLUNK_HOME=/opt/splunk-home
# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden
# here:
#
# SPLUNK_DB=/opt/splunk-home/var/lib/splunk
# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd
# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER
cat /opt/splunk/etc/splunk-launch.conf
# Copyright (C) 2005-2011 Splunk Inc. All Rights Reserved. Version 4.2.3
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory this configuration
# file was found in
#
SPLUNK_HOME=/opt/splunk
# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden
# here:
#
# SPLUNK_DB=/opt/splunk/var/lib/splunk
# Splunkd daemon name
SPLUNK_SERVER_NAME=splunkd
# Splunkweb daemon name
SPLUNK_WEB_NAME=splunkweb
So the index DB is actually stored at SPLUNK_DB=/opt/splunk/var/lib/splunk (the default location under SPLUNK_HOME, since the SPLUNK_DB line is commented out).
(End)