CIS AWS Foundations Benchmark v5.0.0 - Fine-Tuned Llama 3.2 3B

This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct specialized in AWS Cloud Security and Compliance. It has been trained on the CIS Amazon Web Services Foundations Benchmark v5.0.0 (released March 31, 2025).

Model Description

This model acts as a specialized security assistant for AWS environments. It possesses deep knowledge of the consensus-based best practices for securing Amazon Web Services accounts and resources. The fine-tuning process utilized the specific guidance found in the CIS AWS Foundations Benchmark v5.0.0.

Key Knowledge Areas

The model is trained to understand, audit, and provide remediation steps for the following domains covered in the benchmark:

  • Identity and Access Management (IAM): Configuring root accounts, password policies, MFA, access keys, and roles.
  • Storage: Securing S3 buckets (blocking public access, encryption), RDS instances, and EFS encryption.
  • Logging: Configuring CloudTrail, AWS Config, and S3 server access logging.
  • Monitoring: Setting up CloudWatch alarms for unauthorized API calls, sign-in failures, and changes to network gateways/security groups.
  • Networking: Securing VPCs, Security Groups, NACLs, and the EC2 instance metadata service (IMDSv2).
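To illustrate the kind of audit logic the model is meant to explain, here is a minimal sketch of an IAM password-policy check. It is purely a demonstration: the dict shape mirrors the `PasswordPolicy` structure returned by boto3's `iam.get_account_password_policy`, and the thresholds (14-character minimum, 24-password reuse prevention, 90-day rotation) are illustrative assumptions, not quotations from the benchmark text.

```python
# Illustrative sketch only: check an IAM password policy dict against
# CIS-style requirements. Thresholds here are assumptions for demonstration.

def check_password_policy(policy: dict) -> list:
    """Return a list of human-readable findings; an empty list means compliant."""
    findings = []
    if policy.get("MinimumPasswordLength", 0) < 14:
        findings.append("Minimum password length is below 14 characters")
    if not policy.get("RequireSymbols", False):
        findings.append("Password policy does not require symbols")
    if policy.get("MaxPasswordAge", 9999) > 90:
        findings.append("Passwords are not rotated within 90 days")
    if policy.get("PasswordReusePrevention", 0) < 24:
        findings.append("Fewer than 24 previous passwords are remembered")
    return findings

# A policy with a short minimum length yields exactly one finding.
weak = {"MinimumPasswordLength": 8, "RequireSymbols": True,
        "MaxPasswordAge": 60, "PasswordReusePrevention": 24}
print(check_password_policy(weak))
```

In practice the model would be asked for the exact audit command and threshold for a given CIS v5.0.0 recommendation; a helper like this simply shows how such a check reduces to comparing policy fields against the benchmark's values.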

Intended Uses & Capabilities

This model is designed for DevSecOps engineers, Cloud Architects, and Security Auditors who need quick access to compliance rules and remediation scripts.

Use cases include:

  • Q&A on Compliance: Ask specific questions like "How do I ensure CloudTrail is enabled in all regions?" or "What is the remediation for unauthorized API calls monitoring?".
  • Remediation Steps: Generate CLI commands or Console steps to fix security findings.
  • Audit Procedures: Retrieve the specific commands to verify if a resource is compliant with CIS v5.0.0.
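As a concrete example of such an audit procedure, verifying S3 "Block Public Access" (the topic of the sample question in the usage section below) reduces to checking four boolean flags. A minimal sketch, assuming a configuration dict shaped like the `PublicAccessBlockConfiguration` structure returned by boto3's `s3.get_public_access_block`; the helper itself is a demonstration, not part of the model or the benchmark:

```python
# Illustrative sketch only: all four S3 Block Public Access flags must be
# enabled for the bucket/account to be considered compliant.

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_block_public_access_compliant(config: dict) -> bool:
    """True only when every flag is present and explicitly True."""
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)

print(is_block_public_access_compliant({f: True for f in REQUIRED_FLAGS}))  # True
print(is_block_public_access_compliant({"BlockPublicAcls": True}))          # False
```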

How to Use

To use this model, you need to load the base model and the LoRA adapter.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_model_id = "halencarjunior/cis_aws_foundation_benchmark_5_0"

# 1. Load Base Model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# 2. Load Adapter
model = PeftModel.from_pretrained(base_model, adapter_model_id)

# 3. Inference
messages = [
    {"role": "user", "content": "How do I ensure that S3 buckets are configured with 'Block Public Access' enabled according to CIS v5.0?"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))