Amazon VPC Monitoring – Frequently Asked Questions

This Knowledge Base article answers frequently asked questions about configuring and monitoring Amazon VPC using Applications Manager. It covers prerequisites, data collection issues, VPC Flow Log configuration, Network Address Usage (NAU) metrics, Athena and Glue integration, and AWS-related cost considerations.

Each FAQ below explains:

  • How to correctly enable required AWS features
  • Why specific error messages appear in the VPC monitor
  • What checks to perform in AWS
  • Which AWS services may incur cost
  • What cleanup and retention actions users must configure

Why does data collection fail with UnauthorizedOperation or AccessDenied errors?

This error occurs when the AWS credentials configured for the Amazon VPC monitor do not have permission to invoke one or more required AWS APIs.

What errors will be displayed?

You may see messages similar to the following:

Data collection has failed. Reason: UnauthorizedOperation – You are not authorized to perform this operation…

User: {username} is not authorized to perform: ec2:DescribeSubnets


Why does this happen?
  • The IAM user or role configured for monitoring is missing required permissions.
  • Recent IAM policy changes were not applied to the credential.
  • An explicit deny exists in IAM policies or Service Control Policies (SCPs).

APIs that commonly trigger this error in the Amazon VPC monitor

  • EC2
    • ec2:DescribeSubnets
    • ec2:DescribeNetworkInterfaces
    • ec2:DescribeFlowLogs
  • Glue
    • glue:GetDatabase
    • glue:GetTable
    • glue:GetPartition
    • glue:CreateDatabase
    • glue:CreateTable
    • glue:CreatePartition
  • Athena
    • athena:StartQueryExecution
    • athena:GetQueryExecution
    • athena:GetQueryResults

How to fix the issue

  1. Identify the AWS credential used by the VPC monitor.
  2. Open the AWS IAM Console.
  3. Locate the IAM user or role.
  4. Attach or update a policy that includes the missing API permissions.
  5. Check for explicit denies in IAM policies or SCPs.
  6. Save the changes and wait for the next poll cycle.
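The permissions listed above can be combined into a single identity policy attached to the monitoring user or role. A minimal sketch (the `Sid` is illustrative, and `"Resource": "*"` is used for brevity; scope resources to your buckets, workgroup, and Glue catalog where your security policy requires it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCMonitoringAccess",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeFlowLogs",
        "glue:GetDatabase",
        "glue:GetTable",
        "glue:GetPartition",
        "glue:CreateDatabase",
        "glue:CreateTable",
        "glue:CreatePartition",
        "athena:StartQueryExecution",
        "athena:GetQueryExecution",
        "athena:GetQueryResults"
      ],
      "Resource": "*"
    }
  ]
}
```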
Tip:
  • Check if the prerequisites mentioned here are correctly met.
  • If you're using granular permissions, check this KB to ensure all the permissions are provided.
Summary
UnauthorizedOperation or AccessDenied errors indicate missing IAM permissions. Grant the required EC2, Glue, and Athena permissions to the monitoring credential and re-run data collection.
Why does data collection fail with AuthFailure – AWS was not able to validate the provided access credentials in Amazon VPC monitoring?

This error indicates that AWS could not authenticate the credentials configured for the Amazon VPC monitor. The access key, secret key, or role configuration may be invalid, expired, or incorrectly configured.

What error messages will be displayed?

You may see messages similar to the following:

Data collection has failed. Reason: AuthFailure – AWS was not able to validate the provided access credentials.


Why does this happen?
  • The access key or secret key is incorrect.
  • The IAM user was deleted or disabled.
  • The key has been rotated but not updated in the product.
  • The role trust relationship is misconfigured.
  • Temporary credentials (STS) have expired.

How to fix the issue

  1. Identify the AWS credentials provided for the Amazon VPC monitor.
  2. Open the AWS IAM Console.
  3. Verify that the access key is active and not deleted.
  4. Re-enter the access key and secret key in the monitor configuration.
  5. Wait for the next poll cycle or trigger a rediscovery.
Tip: Use the IAM Policy Simulator or AWS CLI command sts get-caller-identity to validate the credentials before retrying data collection.
Summary
AuthFailure errors indicate that AWS cannot authenticate the configured credentials. Ensure that the access keys or role configuration are valid, active, and correctly updated in the VPC monitor.

Why does data collection fail with a timeout error (“The server did not respond for more than 2 minutes”)?

This error indicates that Applications Manager did not receive a response from AWS within the expected time window. The delay can occur due to network conditions, proxy behavior, AWS API latency, or large environments that take longer to process.

What error will I see?

Messages similar to the following:

Data collection has failed. Reason: The server did not respond to the request for more than 2 minutes.


Why does this happen?
  • Network latency or intermittent connectivity to AWS endpoints
  • Proxy or firewall delays/blocks requests
  • AWS API service latency in the target region
  • Large VPCs with many ENIs or subnets cause longer processing times
  • High Athena query execution time during metric collection

How to troubleshoot and fix the issue

  1. Verify network connectivity from the monitoring server to AWS service endpoints.
  2. Check proxy or firewall configurations for blocked or delayed requests.
  3. Ensure that TLS inspection does not modify AWS responses.
  4. Confirm that the AWS region is reachable and operational.
  5. Review Athena query execution time in the AWS Console.
  6. Consider increasing the data collection interval for very large VPCs.
  7. Retry data collection after the next poll cycle.
Tip: If you are using a proxy, allowlist AWS service endpoints (EC2, Glue, Athena, and S3) to prevent request delays.
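When building a proxy allowlist, the regional endpoint hostnames for these services follow the standard `<service>.<region>.amazonaws.com` convention in commercial AWS regions (other partitions, such as GovCloud or China, use different suffixes). A quick sketch to generate the list:

```python
# Build the regional AWS endpoint hostnames to allowlist on a proxy or
# firewall for VPC monitoring. The "<service>.<region>.amazonaws.com"
# pattern applies to commercial AWS regions; other partitions differ.

def vpc_monitoring_endpoints(region: str) -> list[str]:
    services = ["ec2", "glue", "athena", "s3"]
    return [f"{service}.{region}.amazonaws.com" for service in services]

print(vpc_monitoring_endpoints("us-east-1"))
# ['ec2.us-east-1.amazonaws.com', 'glue.us-east-1.amazonaws.com',
#  'athena.us-east-1.amazonaws.com', 's3.us-east-1.amazonaws.com']
```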
Summary
Timeout errors occur when AWS APIs or queries take too long to respond. Check network connectivity, proxy configuration, and Athena execution behavior, and tune polling intervals for large environments.
Why does data collection fail with “S3 Bucket configured for this VPC does not exist”?

This error occurs when the Amazon S3 bucket configured for VPC Flow Logs cannot be found or accessed by the configured monitoring setup. Without a valid bucket, Flow Logs cannot be delivered and performance metrics cannot be collected.

What error will I see?

Messages similar to the following:

Data collection has failed. Reason: S3 Bucket ({bucket-name}) configured for this VPC doesn't exist.


Why does this happen?
  • The S3 bucket was deleted or renamed.
  • The bucket name was entered incorrectly.
  • The bucket exists in a different AWS region.
  • The IAM role does not have permission to access the bucket.

How to troubleshoot and fix the issue

  1. Open the AWS Management Console.
  2. Navigate to S3 and verify the bucket name.
  3. Confirm the bucket exists in the same region as the VPC.
  4. If the bucket was deleted, recreate it or update the Flow Log configuration.
  5. Check that the IAM role used for Flow Logs has access to the bucket.
  6. Verify the bucket policy does not block writes from VPC Flow Logs.
  7. Wait for the next poll cycle or trigger rediscovery.
Tip: Always create the Flow Log S3 bucket in the same region as the VPC and restrict access to only the required AWS service and prefix.
Summary
This error means the configured S3 bucket cannot be found or accessed. Verify the bucket exists, is in the correct region, and that IAM permissions allow Flow Logs to write to it.
Why does data collection fail with an invalid VPC Flow Log format?

This error occurs when VPC Flow Logs are enabled for the VPC, but the configured log record format does not match the required schema for parsing network traffic and performance metrics.

What error will I see?

Messages similar to the following:

Data collection has failed. Reason: Flow Log is enabled for this VPC {vpc-name}, but the configured flow log format is not in the expected format. Refer to this KB Article to know the expected format.


Why does this happen?
  • The Flow Log uses a custom format that does not match the expected schema.
  • One or more required fields are missing.
  • The field order was modified.
  • Logs are not delivered in plain text format.

Expected Flow Log record format

${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}
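Before editing the Flow Log in the AWS Console, you can sanity-check a format string locally. This is a small sketch that verifies both field names and field order against the schema above (the record must also be delivered as plain text, which this check cannot see):

```python
# Validate a VPC Flow Log record format string against the schema the
# monitor expects. Field names AND order must match exactly.

EXPECTED_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def is_expected_format(log_format: str) -> bool:
    # Tokens look like "${version}"; strip the "${...}" wrapper from each.
    fields = [token.strip("${}") for token in log_format.split()]
    return fields == EXPECTED_FIELDS

expected = ("${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} "
            "${srcport} ${dstport} ${protocol} ${packets} ${bytes} "
            "${start} ${end} ${action} ${log-status}")
print(is_expected_format(expected))                            # True
print(is_expected_format("${version} ${srcaddr} ${dstaddr}"))  # False
```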

How to fix the issue

  1. Open the AWS Management Console.
  2. Navigate to VPC → Your VPCs.
  3. Select the affected VPC.
  4. Go to the Flow Logs tab.
  5. Edit the existing Flow Log configuration or create a new one if required.
  6. Update the Log format to match the expected schema exactly.
  7. Ensure that the log file format is set to text.
  8. Save the changes and wait for new logs to be delivered to S3.
Tip: After updating the format, wait for the next log delivery cycle and verify that new files appear in S3 before retrying data collection.
Summary
Invalid Flow Log format errors occur when the record schema does not match expectations. Update the format to include the required fields in the correct order.
Additional validation details shown in the monitor

When the Flow Log format does not match the expected schema, the monitor surfaces configuration details directly in the UI to help you troubleshoot without switching to the AWS Console.

Why does data collection fail when VPC Flow Logs use an unsupported destination type?

This error occurs when VPC Flow Logs are configured to send data to an unsupported destination type.

Supported monitoring requires Flow Logs to be delivered to an Amazon S3 bucket.

What error will I see?

Messages similar to the following:

Data collection has failed. Reason: Flow Logs for VPC {vpc-id} point to an unsupported destination type ({destination-type}). To continue monitoring, reconfigure the Flow Logs to send data to an Amazon S3 bucket.


Why does this happen?
  • Flow Logs are configured to send data to CloudWatch Logs.
  • Flow Logs are configured to send data to Kinesis Data Firehose.
  • The destination type was changed after the VPC monitor was created.

Supported destinations

Destination Type          Supported
Amazon S3                 Yes
CloudWatch Logs           No
Kinesis Data Firehose     No

How to fix the issue

  1. Sign in to the AWS Management Console.
  2. Navigate to VPC → Your VPCs.
  3. Select the affected VPC.
  4. Open the Flow Logs tab.
  5. Edit the existing Flow Log configuration.
  6. Change Destination to Send to an Amazon S3 bucket.
    Do not select CloudWatch Logs or Kinesis Data Firehose.
  7. Save the changes.
  8. Wait for new log files to appear in S3 and allow the next poll cycle to complete.
Tip: If multiple Flow Logs exist for the VPC, ensure that at least one VPC-level Flow Log is configured to deliver data to S3.
Summary
This error occurs when Flow Logs are configured to use unsupported destinations instead of Amazon S3. Reconfigure the Flow Logs to deliver data to an S3 bucket to resume monitoring.
How do I enable VPC Flow Logs for a VPC?
When will I see this message?

You will see the following message when VPC Flow Logs are not enabled or not enabled at the VPC level:



This indicates that network traffic and performance metrics cannot be collected until VPC Flow Logs are enabled at the VPC level.

How to enable VPC Flow Logs (required configuration)

  1. Sign in to the AWS Management Console.
  2. Navigate to VPC → Your VPCs.
  3. Select the target VPC.
  4. Open the Flow Logs tab.
  5. Click Create flow log.
  6. For Resource type, ensure VPC is selected.
  7. Select the Traffic type:
    • All (recommended)
    • Accept
    • Reject
  8. For Destination, choose Send to an Amazon S3 bucket.
    Do not select CloudWatch Logs.
  9. Specify the S3 bucket ARN where the Flow Logs should be delivered.
  10. Choose or create an IAM role (find more details below) with permissions to write logs to the S3 bucket.
  11. Under Log format, ensure:
    • Log file format is plain text
    • The log format exactly matches the expected schema below
    ${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}
  12. Ensure Flow Logs are partitioned by time:
    • Partition frequency: Every 24 hours (default AWS behavior)
  13. Click Create flow log to save the configuration.

IAM Role – How It Is Used

Purpose of the IAM role

The IAM role is used by the VPC Flow Logs service to write log files into the specified S3 bucket. This role is assumed by the VPC Flow Logs service and does not grant direct access to users.

How to use the IAM role

  1. During Flow Log creation, locate the IAM role option.
  2. Select an existing role or choose Create new IAM role.
  3. Ensure the role:
    • Allows writing objects to the target S3 bucket
    • Trusts the service vpc-flow-logs.amazonaws.com

Minimum IAM Permissions (Attach to the Role)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket-name>/AWSLogs/<account-id>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>"
    }
  ]
}

Important Notes

  • Flow Logs must be enabled at the VPC level.
  • Logs must be delivered to an S3 bucket in the same region.
  • Only text format Flow Logs are supported.
  • The product does not delete any S3 data. Customers should configure S3 lifecycle policies as needed.
Summary
To successfully enable VPC monitoring, ensure that VPC Flow Logs are enabled at the VPC level, delivered to an S3 bucket in the same region, stored in text format, partitioned daily, and written using a correctly configured IAM role.

Why does the monitor say Flow Logs are not enabled even though I enabled them for a subnet or ENI?

This occurs when VPC Flow Logs are enabled only at the subnet or network interface (ENI) level, but not at the VPC level. The VPC monitor requires a VPC-level Flow Log configuration to collect traffic and performance metrics across the entire VPC.

What will I see in the monitor?
  • A message stating that VPC Flow Logs are not enabled
  • No traffic or performance metrics in graphs
  • Configuration status showing Flow Logs as disabled


Why is VPC-level Flow Log required?
  • Subnet- or ENI-level Flow Logs cover only individual resources.
  • VPC-level Flow Logs capture traffic for all subnets and ENIs.
  • The monitor depends on a single VPC-level S3 data source for analytics.

How to verify in AWS

  1. Open the AWS Management Console.
  2. Go to VPC → Your VPCs.
  3. Select the affected VPC.
  4. Open the Flow Logs tab.
  5. Check whether there is a Flow Log entry where:
    • Resource type: VPC
    • Destination: Amazon S3

How to fix the issue

  1. If no VPC-level Flow Log exists, click Create flow log.
  2. Select VPC as the Resource type.
  3. Choose Send to an Amazon S3 bucket as the destination.
  4. Specify the S3 bucket ARN.
  5. Configure the required log format.
  6. Save the Flow Log configuration.
Important:
Subnet-level or ENI-level Flow Logs alone are not sufficient. At least one Flow Log must be enabled at the VPC level and delivered to an Amazon S3 bucket for monitoring to work.
Summary
If Flow Logs are enabled only for subnets or ENIs, the monitor will still report them as disabled. Enable a VPC-level Flow Log with an S3 destination to begin collecting metrics.
Why does the VPC monitor remain unhealthy when a subnet or ENI is deleted in AWS?

When a subnet or Elastic Network Interface (ENI) is deleted in AWS, the VPC monitor continues to retain the resource details for a short duration to avoid transient discovery issues and false alerts.

Default behavior
  • Deleted subnets or ENIs are retained for 1 consecutive poll by default.
  • During this period, the monitor health may be affected and corresponding alerts can be triggered.
  • If the resource is not rediscovered in the next poll, it is removed automatically.
What you will see in the monitor
  • An alert or health degradation for the deleted subnet or ENI
  • Status persists for one poll cycle before cleanup


How to change the persistence behavior

You can control how long deleted subnets or ENIs remain in the monitor before being removed.

  1. Go to Settings → Discovery and Data Collection.
  2. Open Performance Polling.
  3. Click Optimize Data Collection.
  4. Choose:
    • Monitor Type: Amazon Virtual Private Cloud (VPC)
    • Metric Name: SubnetDetails or ENIDetails
  5. Locate the option: Delete entry if it is not discovered {number} consecutive polls.
  6. Enable or disable the option and adjust the poll count based on your requirement.
  7. Save the changes.

How this affects alerts and data persistence
  • Lower poll counts remove deleted resources faster but may increase transient alerting.
  • Higher poll counts retain deleted resources longer and may continue to affect health and alerts.
  • Choose values based on how frequently your environment changes.
Summary
By default, deleted subnets or ENIs affect monitor health for one poll cycle. You can control how long these entries persist by adjusting the Delete entry if it is not discovered setting under Performance Polling.

How do I enable Network Address Usage (NAU) metrics for a VPC?
When will I see this message?
  • On the Configuration tab, the Network Address Usage Settings status is set to Disabled.
  • Under the Performance Overview tab, an informational note is displayed with steps to enable NAU metrics in AWS.

How to enable Network Address Usage (NAU) metrics

  1. Sign in to the AWS Management Console.
  2. Navigate to VPC → Your VPCs.
  3. Select the target VPC.
  4. Click Actions → Edit VPC settings.
  5. Enable the Network Address Usage metrics option.
  6. Click Save changes.

Data availability & polling behavior
  • NAU metrics are published by AWS once every 24 hours.
  • The recommended default and minimum polling interval for NAU metrics is 12 hours.
  • Once enabled, NAU metrics are automatically mapped under Performance Polling.
Key takeaway:
Network Address Usage metrics must be enabled at the AWS VPC console. After enabling, the Configuration and Performance Overview tabs will reflect the change, and data will appear once AWS publishes the metrics.

How can I monitor Network Address Usage and Peered Network Address Usage efficiently to avoid hitting capacity limits?

Network Address Usage (NAU) metrics published by AWS represent the maximum number of Network Address Usage units that can be consumed within a VPC. These limits protect VPCs from exhausting IP-related resources and are enforced using AWS service quotas.

Where can I see this in the monitor?
  • In the Performance Overview section as time-series graphs.
  • In the Network Address Usage and Peered Network Address Usage charts.
  • Current values are displayed below the graph for quick assessment.

What are NAU units?

NAU units are an AWS-internal capacity measure used to track how many IP-address-consuming resources exist within a VPC. Different AWS resources consume different numbers of NAU units.

Examples of resources that contribute to NAU units include:
  • Elastic Network Interfaces (ENIs)
  • EC2 instances
  • NAT Gateways
  • Load balancers
  • VPC endpoints
  • Lambda functions configured inside a VPC

The exact NAU unit consumption varies by resource type and AWS manages these values internally.

What limits should I be aware of?

  • A single VPC supports up to 64,000 NAU units by default.
  • This can be increased up to 256,000 by requesting a service quota increase.
  • For VPCs peered within the same Region, the combined limit is 128,000 NAU units (increasable to 512,000).
  • Inter-region VPC peering does not contribute to this combined limit.
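A quick headroom check against these quotas can be sketched as follows. The quota values default to the figures cited above; always confirm the actual limits for your account in the Service Quotas console, and note the 80% alert threshold here is purely illustrative:

```python
# Compare current NAU usage against a quota and report remaining headroom.
# Defaults use the limits cited above (64,000 per VPC); verify your actual
# quota in the AWS Service Quotas console. The 80% threshold is illustrative.

def nau_headroom(current_units: int, quota_units: int = 64_000) -> dict:
    used_pct = 100 * current_units / quota_units
    return {
        "used_percent": round(used_pct, 1),
        "remaining_units": quota_units - current_units,
        "approaching_limit": used_pct >= 80,
    }

print(nau_headroom(51_200))
# {'used_percent': 80.0, 'remaining_units': 12800, 'approaching_limit': True}
```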

How should I use these values operationally?

  • Track growth trends over time in the charts.
  • Manually compare current values against the AWS limits.
  • Review NAU before large-scale provisioning or subnet expansion.
  • Monitor Peered NAU when adding same-region peering connections.
  • Request AWS quota increases in advance if values approach the limits.
Tip: Check the AWS Service Quotas console under the VPC service to view your account’s configured NAU limits and any approved increases.
Summary
The monitor displays AWS-reported NAU unit values. These metrics, combined with alerting and quota planning, help prevent IP exhaustion and scaling failures. Use the monitored values alongside AWS Service Quotas to proactively manage VPC capacity.
Why did the Athena query or partition creation fail after multiple retries?

This error appears when an Amazon Athena query — including partition-creation queries — does not complete successfully after the configured retry attempts.

Typical error message:

Athena query {0} did not succeed after maximum retry attempts ({1}). 
Reason: {2}

Where:

  • {0} — Athena Query Execution ID
  • {1} — Maximum retry attempts configured by the monitor
  • {2} — Last failure reason returned by Athena

How Applications Manager handles Athena query execution

After submitting a query, the monitor repeatedly polls Athena for its status:

  • Checks whether the query is in QUEUED or RUNNING state.
  • Waits a configured interval between checks.
  • Retries until the retry limit is reached.
  • Fails the poll cycle if the query enters FAILED or CANCELLED state, or does not complete in time.

Partition-creation queries (ALTER TABLE ADD PARTITION) are also validated using this same retry logic.
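The retry behavior described above amounts to a generic poll-until-terminal loop. In this sketch, `get_status` stands in for an Athena GetQueryExecution call, and the retry limit and wait interval are illustrative defaults, not the product's actual configuration:

```python
import time

# Generic poll-until-terminal loop mirroring the retry behavior described
# above. `get_status` stands in for an Athena GetQueryExecution call; the
# limit and interval are illustrative, not the product's real settings.

def wait_for_query(get_status, max_attempts=10, interval_seconds=0):
    for _ in range(max_attempts):
        state = get_status()
        if state == "SUCCEEDED":
            return True
        if state in ("FAILED", "CANCELLED"):
            return False              # fail the poll cycle immediately
        time.sleep(interval_seconds)  # QUEUED or RUNNING: wait and re-check
    return False                      # retry limit reached without completion

# Simulated query that stays RUNNING twice, then succeeds.
states = iter(["RUNNING", "RUNNING", "SUCCEEDED"])
print(wait_for_query(lambda: next(states)))  # True
```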


Common reasons for Athena query or partition failures

  • Insufficient IAM permissions for Athena or Glue operations
  • S3 bucket or output location not accessible
  • Missing or invalid Glue table/partition definitions
  • Flow Log data not present for the requested time window
  • Athena concurrency or service quota limits
  • Network or proxy issues between Applications Manager and AWS APIs
  • Invalid SQL due to schema mismatch or corrupted logs

What customers should verify in AWS

  1. Open the AWS Athena Console and locate the Query Execution ID shown in the error.
  2. Review the Query details and failure message.
  3. Confirm that the configured Athena output S3 location exists and is writable.
  4. Check the Glue database and table definitions.
  5. Verify that the IAM role has permissions for:
    • athena:StartQueryExecution
    • athena:GetQueryExecution
    • athena:GetQueryResults
    • glue:GetDatabase / CreateDatabase
    • glue:GetTable / CreateTable
    • glue:GetPartition / CreatePartition
  6. Review Athena service quotas (concurrent queries, data scanned).
  7. Check S3 lifecycle rules to ensure logs were not deleted too early.

Does the tool retry automatically?

Yes. The monitor retries Athena query-status checks up to the configured retry limit and waits between attempts before marking the poll as failed.

If the query remains in progress too long or fails, the poll cycle ends and the error is displayed in the monitor.


What is managed by the monitoring tool vs AWS?

Handled by Applications Manager:
  • Retrying query status
  • Reporting failures
  • Logging Athena errors
  • Health updates

Customer must manage in AWS:
  • IAM permissions
  • S3 access and retention
  • Glue schema health
  • Athena quotas
  • Network/proxy rules

Important
If Athena queries or partition creation fail repeatedly, review the Athena console using the Query Execution ID and verify IAM permissions, S3 access, Glue metadata, and service quotas before retrying data collection.

Summary

This error indicates that Athena did not complete a query — including partition creation — within the allowed retry attempts. Customers should inspect the Athena query execution details in AWS and validate IAM permissions, S3 locations, Glue configuration, and Athena quotas.

Why did Glue partition creation fail for my VPC Flow Logs table?

This error appears when Applications Manager attempts to create a daily AWS Glue partition for VPC Flow Logs and the operation does not succeed.

Typical error message:

Partition creation failed for date {0} in table {1} and database {2}.

Where:

  • {0} — Partition date (for example: 2026-01-26)
  • {1} — Glue table name
  • {2} — Glue database name

Why does the monitor create partitions?

VPC Flow Logs are partitioned by date (YYYY-MM-DD) so that Athena scans only the required data range. This keeps queries fast and reduces AWS cost.

If a partition for the current day does not exist, the monitor attempts to create it automatically before running queries.


Common reasons for partition creation failure

  • Missing IAM permissions for Glue or Athena
  • S3 location for the partition does not exist or is inaccessible
  • Incorrect bucket name or prefix
  • Glue database or table does not exist
  • Flow Logs have not yet been delivered for the day
  • Region mismatch between VPC, S3 bucket, and Glue/Athena
  • Service quota limits or throttling

What should I check in AWS? (Step-by-step)

  1. Open the AWS Glue Console and verify that the database and table exist.
  2. Confirm the table’s partition key includes the DATE column.
  3. Check that the S3 path for the date partition exists:
    s3://<bucket-name>/AWSLogs/<account-id>/vpcflowlogs/<region>/YYYY/MM/DD/
  4. Open the Athena Console and review the failed query details.
  5. Verify that the IAM role has permissions for:
    • glue:GetDatabase
    • glue:GetTable
    • glue:GetPartition
    • glue:CreatePartition
    • athena:StartQueryExecution
  6. Ensure the S3 bucket policy allows Glue and Athena access.
  7. Confirm that Flow Logs are enabled at the VPC level and delivered to S3.
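The daily partition path in step 3 follows directly from the delivery date, so it can be derived programmatically when checking whether the day's logs have arrived. In this sketch the bucket name and account ID are placeholders to fill in with your own values:

```python
from datetime import date

# Build the S3 prefix where VPC Flow Logs for a given day are delivered
# (step 3 above). Bucket, account ID, and region are placeholders.

def flow_log_prefix(bucket: str, account_id: str, region: str, day: date) -> str:
    return (f"s3://{bucket}/AWSLogs/{account_id}/vpcflowlogs/"
            f"{region}/{day:%Y/%m/%d}/")

print(flow_log_prefix("my-flowlog-bucket", "123456789012",
                      "us-east-1", date(2026, 1, 26)))
# s3://my-flowlog-bucket/AWSLogs/123456789012/vpcflowlogs/us-east-1/2026/01/26/
```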

What is handled by Applications Manager vs AWS?

Handled by Applications Manager:
  • Detect missing partitions
  • Submit partition-creation query
  • Surface error messages

Customer must manage in AWS:
  • IAM roles & policies
  • S3 bucket structure
  • Glue schema
  • AWS quotas
  • Region alignment

Important
Partition creation failures usually indicate permission, S3 path, or Glue configuration problems. Reviewing the Athena query details and Glue table settings in AWS will reveal the exact cause.

Summary

This error means the daily Glue partition for VPC Flow Logs could not be created. Verify IAM permissions, Glue schema, S3 directory structure, and Flow Log delivery to resolve the issue.

What cleanup and retention should I configure in AWS for VPC monitoring?

Applications Manager does not delete any AWS resources or data generated as part of VPC monitoring. This includes VPC Flow Logs stored in Amazon S3, Athena query results, and AWS Glue metadata.

Customers are responsible for configuring appropriate data retention and cleanup policies on the AWS side to control storage growth and cost.

What data can accumulate over time?
  • VPC Flow Log files delivered to S3
  • Athena query output files
  • AWS Glue Data Catalog databases, tables, and partitions

Configure Amazon S3 lifecycle rules

To control storage usage, configure Amazon S3 Lifecycle policies on:

  • The S3 prefix containing Flow Logs
  • The Athena query output prefix

Recommended actions:

  • Transition older logs to cheaper storage tiers (Glacier/Deep Archive).
  • Expire objects after your compliance-approved retention period.
  • Align retention with your monitoring poll interval and audit requirements.
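A lifecycle configuration implementing the recommendations above might look like the following sketch. The rule ID, prefix, transition day counts, and expiration period are all assumptions to adapt to your bucket layout and compliance requirements:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-flow-logs",
      "Filter": { "Prefix": "AWSLogs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 90 }
    }
  ]
}
```

A similar rule with a shorter expiration (and the Athena output prefix in the Filter) can be applied to query result files.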

Manage Athena query result files

Athena stores query outputs in the configured S3 location. These files can accumulate quickly if not cleaned.

  • Apply lifecycle rules to the Athena output prefix.
  • Use short retention (for example, 7–30 days) unless auditing requires more.

Glue catalog and partition considerations

Glue databases, tables, and partitions are metadata objects and do not consume significant storage, but excessive unused partitions can affect query planning and maintenance.

  • Ensure partitions are created only for active days.
  • Remove obsolete tables when VPC monitoring is permanently disabled.
Important:
The monitoring tool does not automatically delete S3 objects or AWS metadata. Always configure retention policies directly in AWS to avoid unexpected storage growth or cost.

Best practices for long-term operation

  • Review the S3 bucket size periodically in the AWS Management Console.
  • Monitor Athena scan volumes in AWS Cost Explorer.
  • Align log retention with compliance requirements.
  • Document lifecycle policies for audit readiness.
Summary
Since the tool does not perform cleanup, customers must configure S3 lifecycle rules and retention policies for Flow Logs and Athena outputs. Proper housekeeping prevents uncontrolled storage growth and keeps AWS costs predictable.
How can I estimate my monthly AWS expense before enabling VPC monitoring?

This article explains the AWS-side costs incurred when enabling Amazon VPC monitoring using VPC Flow Logs delivered to Amazon S3 and analyzed using Amazon Athena and AWS Glue.

Costs vary significantly based on the traffic volume, number of ENIs, polling interval, retention period, region, and AWS pricing tiers. All examples below are illustrative and should be validated using the AWS Pricing Calculator and your AWS billing dashboard.


AWS Services that generate charges

  • VPC Flow Logs – Log delivery/ingestion volume.
  • Amazon S3 – Storage of Flow Logs and Athena outputs.
  • Amazon Athena – Charged per TB of data scanned.
  • AWS Glue – Data Catalog metadata, crawlers, or ETL jobs.
  • AWS API usage – Control-plane calls (usually negligible).

How the VPC monitor uses Athena

For every poll cycle, the monitor executes the following:

  • Three Athena queries per ENI to compute traffic and performance metrics.
  • An ALTER TABLE ADD PARTITION query per day (if the partition does not exist).

Default performance polling interval: 15 minutes (96 polls per day).


Primary AWS cost drivers

1. VPC Flow Logs delivery

AWS charges for the volume of Flow Logs delivered. The exact pricing varies by region and AWS billing tier.

Main driver: Network traffic inside the VPC.

2. Amazon S3 storage

VPC Flow Logs and Athena query outputs are stored in Amazon S3. Charges depend on:

  • Daily log volume
  • Retention period
  • Storage class (Standard, Glacier, etc.)

Customers are strongly advised to configure S3 Lifecycle policies to expire or archive older logs.

3. Amazon Athena scans

Athena pricing is based on data scanned, not query count. Typical pricing is approximately $5/TB scanned (region dependent).

4. AWS Glue

AWS Glue Data Catalog objects usually incur minimal cost. Glue crawlers or ETL jobs, if enabled, incur DPU-hour charges.

5. AWS API usage

Describe APIs and control-plane calls generally have negligible cost.

6. Data transfer

This applies only when logs or queries cross regions or leave AWS.


Example scenario – Medium-sized VPC

  • 50 ENIs
  • Polling every 15 minutes
  • Logs partitioned by date
  • Estimated scan per query: 5 MB
  • Flow Logs generated: 10 GB/day
  • Retention: 90 days

Athena query volume

  • Queries per poll: 50 × 3 = 150
  • Queries per day: 150 × 96 = 14,400

Estimated Athena scan

  • Daily scan ≈ 72 GB
  • Monthly scan ≈ 2.1 TB

Athena cost estimate

  • 2.1 TB × $5/TB = $10.50/month

S3 storage estimate

  • Monthly ingestion: 300 GB
  • Steady-state (90 days): ~900 GB
  • S3 Standard example: $0.025/GB → ~$22.50/month
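The arithmetic behind this example can be reproduced directly; small differences from the rounded figures above are due to rounding. All inputs and unit prices are illustrative and region-dependent, so validate against current AWS pricing:

```python
# Reproduce the illustrative estimate above. All inputs and unit prices
# are examples only; validate against current AWS pricing for your region.

enis              = 50
queries_per_eni   = 3              # Athena queries per ENI per poll cycle
polls_per_day     = 24 * 60 // 15  # 15-minute polling interval -> 96
scan_per_query_mb = 5
log_volume_gb_day = 10
retention_days    = 90
athena_usd_per_tb = 5.0
s3_usd_per_gb     = 0.025

queries_per_day = enis * queries_per_eni * polls_per_day      # 14,400
daily_scan_gb   = queries_per_day * scan_per_query_mb / 1000  # 72 GB
monthly_scan_tb = daily_scan_gb * 30 / 1000                   # ~2.2 TB
athena_cost     = monthly_scan_tb * athena_usd_per_tb         # ~$10.80
steady_state_gb = log_volume_gb_day * retention_days          # 900 GB
s3_cost         = steady_state_gb * s3_usd_per_gb             # $22.50

print(f"Athena: ~${athena_cost:.2f}/month, S3: ~${s3_cost:.2f}/month")
```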

Illustrative monthly total

Component Estimate
Athena scans $10–15
S3 storage $20–25
Flow Log delivery Traffic-dependent
Glue / APIs Minimal

What can significantly increase cost?

  • Hundreds of ENIs
  • Short polling intervals
  • High east-west traffic
  • No S3 lifecycle policies
  • Poor partitioning
  • Broad Athena scans

Customer responsibilities

  • Configure S3 lifecycle policies
  • Monitor Athena scan volume
  • Track VPC Flow Log ingestion
  • Review Glue partitions periodically
  • Adjust polling interval when required

Important Disclaimer
All cost values shown here are illustrative examples only. AWS pricing differs by region and may change over time. Always validate estimates using AWS official pricing pages, the AWS Pricing Calculator, and Cost Explorer.

Still facing issues?

If you continue to experience issues after following the steps in these FAQs, review the detailed error message shown in the monitor and cross-check the corresponding AWS service configuration.

For persistent failures, collect:

  • The relevant Applications Manager logs
  • The exact error message displayed
  • Affected VPC ID
  • Athena Query Execution ID (if shown)
  • S3 bucket name and region
  • Glue database and table names

Then contact Applications Manager support with this information for faster resolution.

Reminder
AWS pricing, quotas, and service limits change over time. Always validate costs and limits using the AWS Console, Cost Explorer, and Service Quotas dashboard.

Keeping VPC Flow Logs correctly configured, partitions optimized, and retention policies in place will ensure stable monitoring, predictable AWS costs, and accurate visibility into your VPC environment.

