github.com/observiq/bindplane-otel-collector/receiver/splunksearchapireceiver
This receiver collects Splunk events using the Splunk Search API.
Unlike other receivers, the SSAPI receiver is not built to collect live data. Instead, it collects a finite set of historical data and transfers it to a destination, preserving the timestamps from the source. For this reason, the SSAPI receiver only needs to run until all Splunk events have been migrated, which is denoted by the log message: "all search results exported". Until this log message or an error is printed, avoid cancelling the collector, as doing so can interfere with the receiver's ability to protect against writing duplicate events.
| Field | Type | Default | Description |
|---|---|---|---|
| endpoint | string | required (no default) | The endpoint of the Splunk instance to collect from. |
| splunk_username | string | (no default) | The username used to authenticate to Splunk using basic auth. |
| splunk_password | string | (no default) | The password used to authenticate to Splunk using basic auth. |
| auth_token | string | (no default) | The token used to authenticate to Splunk using token auth. Mutually exclusive with basic auth using `splunk_username` and `splunk_password` (see the token-auth sketch after the example configuration below). |
| token_type | string | (no default) | The type of token used to authenticate to Splunk using `auth_token`. Accepted values are "Bearer" or "Splunk". |
| job_poll_interval | duration | 5s | How long to wait between polls for search job completion. The receiver uses an API call to determine whether a search has completed. |
| searches.query | string | required (no default) | The Splunk search to run to retrieve the desired events. Queries must start with the `search` command and must not contain additional commands, nor any time fields (e.g. `earliesttime`). |
| searches.earliest_time | string | required (no default) | The earliest timestamp to collect logs from. Only logs that occurred at or after this timestamp will be collected. Must be in 'yyyy-MM-ddTHH:mm' format (UTC). |
| searches.latest_time | string | required (no default) | The latest timestamp to collect logs from. Only logs that occurred at or before this timestamp will be collected. Must be in 'yyyy-MM-ddTHH:mm' format (UTC). |
| searches.event_batch_size | int | 100 | The number of events to query from Splunk in a single request. |
| storage | component | (no default) | The component ID of a storage extension to use when polling for logs. The storage extension prevents duplication of data after an exporter error by remembering which events were previously exported. It should be configured in all production environments. |
Example configuration:

```yaml
receivers:
  splunksearchapi:
    endpoint: "https://splunk-c4-0.example.localnet:8089"
    tls:
      insecure_skip_verify: true
    splunk_username: "user"
    splunk_password: "pass"
    job_poll_interval: 5s
    searches:
      - query: 'search index=my_index'
        earliest_time: "2024-11-01T01:00:00.000-05:00"
        latest_time: "2024-11-30T23:59:59.999-05:00"
        event_batch_size: 500
    storage: file_storage

extensions:
  file_storage:
    directory: "./local/storage"
```
To migrate events from Splunk to Google Cloud Logging, follow these steps:

1. Identify the Splunk index to migrate events from and build the `searches.query` you pass to the receiver, e.g. `search index=my_index`. Queries must begin with the explicit `search` command, and must not include additional commands, nor any time fields (e.g. `earliesttime`).
2. Determine the timeframe you want to migrate events from, and set the `searches.earliest_time` and `searches.latest_time` config fields accordingly, for example:
   ```yaml
   earliest_time: "2024-12-01T00:00:00.000-05:00"
   latest_time: "2024-12-31T23:59:59.999-05:00"
   ```
3. Add the `googlecloud` exporter to your config. Configure the exporter to send to a GCP project where your service account has the Logging Admin role. To check the permissions of service accounts in your project, go to the IAM page.
4. Disable the `sending_queue` field on the GCP exporter. The sending queue introduces an asynchronous step into the pipeline, which would jeopardize the receiver's ability to checkpoint correctly and recover from errors. For the same reason, avoid using any asynchronous processors (e.g., the batch processor).

After following these steps, your configuration should look something like this:
```yaml
receivers:
  splunksearchapi:
    endpoint: "https://splunk-c4-0.example.localnet:8089"
    tls:
      insecure_skip_verify: true
    splunk_username: "user"
    splunk_password: "pass"
    job_poll_interval: 5s
    searches:
      - query: 'search index=my_index'
        earliest_time: "2024-12-01T00:00:00.000-05:00"
        latest_time: "2024-12-31T23:59:59.999-05:00"
        event_batch_size: 500
    storage: file_storage

exporters:
  googlecloud:
    project: "my-gcp-project"
    log:
      default_log_name: "splunk-events"
    sending_queue:
      enabled: false

extensions:
  file_storage:
    directory: "./local/storage"

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [splunksearchapi]
      exporters: [googlecloud]
```
You are now ready to migrate events from Splunk to Google Cloud Logging.
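The `searches` field is a list, so a single receiver can presumably carry several entries if you need to migrate more than one index. A minimal sketch (the index names `web_logs` and `app_logs` are placeholders, not from the original documentation):

```yaml
receivers:
  splunksearchapi:
    endpoint: "https://splunk-c4-0.example.localnet:8089"
    splunk_username: "user"
    splunk_password: "pass"
    searches:
      # Each entry is an independent query with its own time window.
      - query: 'search index=web_logs'
        earliest_time: "2024-12-01T00:00:00.000-05:00"
        latest_time: "2024-12-31T23:59:59.999-05:00"
      - query: 'search index=app_logs'
        earliest_time: "2024-12-01T00:00:00.000-05:00"
        latest_time: "2024-12-31T23:59:59.999-05:00"
    storage: file_storage
```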