It’s criminally easy to roll out a fear-mongering list of industries, victims, and financial penalties related to ransomware. Gas pipelines, healthcare systems, and local governments have all been hit. Nearly every headline is some form of “And it’s only getting worse” or “Are ransomware attacks the new digital pandemic?” It can seem inevitable, but when it comes to your AWS environment, there are a few things you can do to protect yourself by reducing your attack surface, and you can then use Prowler to keep an eye on it all continuously.

AWS Ransomware Best Practices

Ransomware attacks succeed when an attacker can hold your data hostage, which is only possible if you don’t have reliable backups of that data. Reducing your attack surface and putting consistent data backup and recovery processes in place will help you thwart malicious activity (and recover from application failures as well).

Implement IAM Best Practices 

These include setting least privilege policies, preventing IAM key leakage, applying policies only at the group level, and more. See our previous post on IAM checks in Prowler for all the details on this.
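
For example, one way to keep policies at the group level is to attach them to IAM groups and then add users to those groups, rather than attaching policies directly to users. A minimal AWS CLI sketch, where the group name, user name, and policy are placeholders:

# Attach a managed policy to a group instead of to individual users (placeholder names)
aws iam attach-group-policy \
    --group-name Developers \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# Grant a user those permissions through group membership (placeholder user name)
aws iam add-user-to-group --group-name Developers --user-name alice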

Enable S3 Object Versioning 

Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning you can recover more easily from both unintended user actions and application failures. 
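
As an illustration, here is a minimal AWS CLI sketch for turning versioning on for an existing bucket; the bucket name is a placeholder:

# Enable versioning so deleted or overwritten objects can be restored (placeholder bucket name)
aws s3api put-bucket-versioning \
    --bucket my-data-bucket \
    --versioning-configuration Status=Enabled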

Replicate S3 Buckets 

AWS offers a built-in mechanism for replicating buckets to different S3 buckets for backup purposes, including mitigating malicious delete operations.
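
A rough AWS CLI sketch of setting this up follows. It assumes versioning is already enabled on both buckets and that the replication IAM role already exists; all names and ARNs are placeholders. First, a replication.json along these lines:

{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": {},
      "Destination": { "Bucket": "arn:aws:s3:::my-backup-bucket" }
    }
  ]
}

Then apply it to the source bucket:

# Apply the replication configuration (placeholder bucket name)
aws s3api put-bucket-replication \
    --bucket my-data-bucket \
    --replication-configuration file://replication.json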

Prevent Deletion with S3 Object Lock 

Per AWS, Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use S3 Object Lock to meet regulatory requirements that require WORM storage, or add an extra layer of protection against object changes and deletion.
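
Object Lock requires versioning and is normally enabled when the bucket is created. A minimal AWS CLI sketch with placeholder names, using a 30-day COMPLIANCE retention purely as an example:

# Create a bucket with Object Lock enabled (placeholder name; outside us-east-1 you also need --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket \
    --bucket my-worm-bucket \
    --object-lock-enabled-for-bucket

# Set a default retention rule, e.g. 30 days in COMPLIANCE mode
aws s3api put-object-lock-configuration \
    --bucket my-worm-bucket \
    --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'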

Use GuardDuty S3 Findings 

GuardDuty monitors and generates findings for suspicious access to data stored in your S3 buckets.
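
If you already run a GuardDuty detector, S3 protection can be switched on for it. A sketch with a placeholder detector ID (check the exact syntax against your AWS CLI version):

# Find your detector ID
aws guardduty list-detectors

# Enable S3 protection on that detector (placeholder detector ID)
aws guardduty update-detector \
    --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
    --data-sources '{"S3Logs":{"Enable":true}}'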

Prowler Ransomware Checks

Running a Prowler check is quick and easy. The basic command is prowler; if you run it without options, it will use your environment variable credentials (if they exist) or fall back to the ~/.aws/credentials file, and it will run checks across all regions where needed. To install Prowler, make sure you have Python 3.9 or newer and pip, then run pip install prowler.
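
A typical first run looks something like this (--list-checks prints the check IDs used throughout this post):

# Install Prowler (requires Python 3.9+ and pip)
pip install prowler

# Run all checks against the account your default credentials resolve to
prowler aws

# List the available checks
prowler aws --list-checks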

To run a single check, use option -c and the check ID:

prowler aws -c cloudtrail_logs_s3_bucket_is_not_publicly_accessible

For multiple checks, list the check IDs separated by spaces (line continuations shown for readability): 

prowler aws -c cloudtrail_logs_s3_bucket_is_not_publicly_accessible \
           ec2_ebs_public_snapshot \
           s3_bucket_public_access

Check out the Prowler docs for the full usage details and tutorials. 

Check for Common Open Ports

  • SSH access via EC2 Security Group (Server-level control)
  • RDP access via EC2 Security Group (Server-level control)
  • SSH access via Network ACL (Subnet-level control)
  • Microsoft RDP via Network ACL (Subnet-level control)
  • FTP ports 20 or 21
  • Kafka port 9092
  • Telnet port 23
  • Windows SQL Server ports 1433 or 1434
  • Network ACLs ingress from 0.0.0.0/0 to any port
  • Security groups ingress from 0.0.0.0/0 or ::/0 to any port
  • Oracle ports 1521 or 2483
  • MySQL port 3306
  • Postgres port 5432
  • Redis port 6379
  • MongoDB ports 27017 and 27018
  • Cassandra ports 7199 or 9160 or 8888
  • Memcached port 11211
  • Elasticsearch/Kibana ports 9200, 9300, or 5601

Check now if you have any of those ports open to the internet with:

prowler aws -c ec2_networkacl_allow_ingress_any_port \
           ec2_networkacl_allow_ingress_tcp_port_22 \
           ec2_networkacl_allow_ingress_tcp_port_3389 \
           ec2_securitygroup_allow_ingress_from_internet_to_any_port \
           ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_elasticsearch_kibana_9200_9300_5601 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_kafka_9092 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_memcached_11211 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mysql_3306 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_oracle_1521_2483 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_postgres_5432 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_redis_6379 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_sql_server_1433_1434 \
           ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_telnet_23
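
If one of these checks flags an ingress rule you don’t actually need, removing it closes the exposure. A sketch with placeholder IDs, using a world-open SSH rule as an example:

# Remove an SSH rule open to the world from a security group (placeholder group ID)
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0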

Internet-exposed Resources

The list of things you don’t want exposed to the internet is pretty significant. Thankfully, Prowler has you covered with these checks for resources that could be set as public:

  • EBS Snapshots
  • EC2 AMIs
  • ECR repositories
  • RDS instances
  • Elastic Load Balancers 
  • EC2 Instances
  • EC2 instances with Instance Profiles attached
  • Redshift Clusters
  • Elasticsearch Service (ES) domains (or domains with open access policies)
  • RDS and Cluster Snapshots
  • SQS queue policies
  • SNS topic policies
  • API Gateway endpoint
  • Exposed KMS keys
  • S3 bucket for CloudTrail logs: Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account’s use or configuration.
  • Lambda functions’ resource-based policies

Check now if you have internet-exposed resources with:

prowler aws -c ec2_ebs_public_snapshot \
           ec2_ami_public \
           ecr_repositories_not_publicly_accessible \
           rds_instance_no_public_access \
           elb_internet_facing \
           elbv2_internet_facing \
           ec2_instance_public_ip \
           ec2_instance_internet_facing_with_instance_profile \
           redshift_cluster_public_access \
           opensearch_service_domains_not_publicly_accessible \
           rds_snapshots_public_access \
           sqs_queues_not_publicly_accessible \
           sns_topics_not_publicly_accessible \
           apigateway_endpoint_public \
           kms_key_not_publicly_accessible \
           cloudtrail_logs_s3_bucket_is_not_publicly_accessible \
           awslambda_function_not_publicly_accessible

There are a few other useful checks in this set: 

  • Are CloudFront distributions set to HTTPS
  • S3 buckets that are open to Everyone or Any AWS user
  • S3 buckets which allow WRITE access
  • Ensure a log metric filter and alarm exist for S3 bucket policy changes: Monitoring changes to S3 bucket policies will help reveal application errors and detect malicious activity.
  • Do S3 buckets have Object-level logging enabled in CloudTrail: You can’t use logs for threat analysis if they don’t exist! 
  • Do S3 buckets have default encryption (SSE) enabled: Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. This will ensure data-at-rest is encrypted.
  • Check if EFS File systems have backup enabled
  • Ensure EKS Clusters are created with Private Endpoint Enabled and Public Access Disabled
  • Find VPC security groups with wide-open public IPv4 CIDR ranges 
  • Restrict Access to the EKS Control Plane Endpoint
  • Check if any of your Elastic or public IPs are indexed in Shodan 
  • Check connection and authentication for both:
    • Internet exposed Elasticsearch/Kibana ports 
    • Internet exposed Amazon Elasticsearch Service (ES) domains 

Check all these with:

prowler aws -c cloudfront_distributions_https_enabled \
           s3_bucket_public_access \
           s3_bucket_policy_public_write_access \
           cloudwatch_log_metric_filter_for_s3_bucket_policy_changes \
           cloudtrail_s3_dataevents_write_enabled \
           s3_bucket_default_encryption \
           cloudfront_distributions_logging_enabled \
           eks_endpoints_not_publicly_accessible \
           ec2_securitygroup_allow_wide_open_public_ipv4 \
           eks_control_plane_endpoint_access_restricted \
           ec2_instance_public_ip

If you have a Shodan.io API key, add this at the end:

--shodan <shodan_api_key>
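
When these checks turn up an S3 bucket that has no business being public, turning on S3 Block Public Access is a common remediation. A sketch with a placeholder bucket name:

# Block all forms of public access on the bucket (placeholder bucket name)
aws s3api put-public-access-block \
    --bucket my-data-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true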

RDS Checks

  • Publicly accessible RDS instances: Publicly accessible databases could expose sensitive data to bad actors—check if they exist, and if so, confirm there is a legitimate business reason.
  • Are RDS Snapshots or Cluster Snapshots public: If your RDS snapshot is public then the data which is backed up in that snapshot is accessible to all other AWS accounts.
  • Is storage encrypted: Use a CMK where possible, which will provide additional management and privacy benefits.
  • Is automated backup enabled: Be sure you have automated backup established for production data, with a clearly defined retention period. 
  • Are RDS instances integrated with CloudWatch logs: These logs help you monitor how your services are being used and assist with threat analysis when needed.
  • Is deletion protection enabled: If not, you can enable it in the AWS Management Console (or via the CLI, as in the sketch after this list) for any of your production instances.
  • Is minor version upgrade enabled: Auto Minor Version Upgrade does pretty much what it says: it automatically upgrades when a new minor database engine version is available. Such minor version upgrades often patch security vulnerabilities and fix bugs.
  • Is enhanced monitoring enabled: First you need to create an IAM role and then you can enable Enhanced Monitoring, which uses a smaller monitoring interval for more frequent reporting of OS metrics.
  • Is multi-AZ enabled: With a single-AZ deployment configuration, Amazon RDS can’t automatically fail over to a standby availability zone.
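
Several of these settings can also be changed from the CLI. A minimal sketch with a placeholder instance identifier; review each option against your own availability and cost requirements before applying it:

# Harden an existing RDS instance (placeholder instance identifier)
aws rds modify-db-instance \
    --db-instance-identifier my-production-db \
    --backup-retention-period 7 \
    --deletion-protection \
    --auto-minor-version-upgrade \
    --multi-az \
    --apply-immediately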

Check RDS now with:

prowler aws --service rds

Stay tuned for the next post in this series!


Sign up for Prowler Training

This free course covers everything from the history of Prowler to advanced features.


Toni de la Fuente

Founder of Prowler Open Source & Lead of Prowler Pro

I’m the founder of Prowler Open Source, a tool for AWS security best practices. I previously worked at AWS as a security engineer and security consultant. I’m passionate about FLOSS (Free Libre Open Source Software) in general, and about information security, incident response, and digital forensics in particular. I like everything related to cloud computing and automation. I have built tools for the security and open source communities, including Prowler, phpRADmin, a Nagios plugin for Alfresco, and Alfresco BART (a backup tool). I’ve also contributed to books and courses on Linux, monitoring, and AWS security for Packt Publishing.