// Commands data const commandsData = { "ai-tools": [ { "name": "audit-local-ai-packages", "category": "ai-tools", "description": "Evaluate local AI inference packages and suggest additions", "tags": [ "ai", "ml", "inference", "packages", "recommendations", "project", "gitignored" ], "content": "You are helping the user evaluate their local AI inference setup and suggest packages to install.\n\n## Process\n\n1. **Check currently installed AI/ML packages**\n\n **Python packages:**\n - `pip list | grep -E \"torch|tensorflow|transformers|diffusers|onnx\"`\n\n **System packages:**\n - `dpkg -l | grep -E \"rocm|cuda|python3-\"`\n\n **Conda environments:**\n - `conda env list` (if conda is installed)\n\n **Standalone tools:**\n - Check for: Ollama, ComfyUI, LocalAI, text-generation-webui\n - Check `~/programs/ai-ml/`\n\n2. **Assess hardware configuration**\n - GPU: `rocm-smi` or `nvidia-smi`\n - RAM: `free -h`\n - Storage: `df -h`\n - CPU capabilities: `lscpu | grep -E \"Model name|Thread|Core\"`\n\n3. **Categorize AI inference needs**\n\n **LLM Inference:**\n - Ollama (already covered)\n - llama.cpp\n - vllm\n - text-generation-webui (oobabooga)\n - LocalAI\n\n **Image Generation:**\n - ComfyUI (already covered)\n - AUTOMATIC1111/stable-diffusion-webui\n - InvokeAI\n - Fooocus\n\n **Audio/Speech:**\n - Whisper (speech-to-text)\n - Coqui TTS\n - Bark\n - MusicGen\n\n **Video:**\n - AnimateDiff\n - Video generation models\n\n **Code:**\n - Continue.dev\n - Tabby (local copilot)\n - Aider\n\n **Vector DB / RAG:**\n - ChromaDB\n - Qdrant\n - FAISS\n - LangChain\n\n4. **Check Python ML libraries**\n - PyTorch (with ROCm/CUDA)\n - TensorFlow\n - transformers (Hugging Face)\n - diffusers\n - accelerate\n - bitsandbytes (quantization)\n - ONNX Runtime\n - optimum\n\n5. **Suggest based on gaps**\n - Identify what's missing for common workflows\n - Prioritize based on hardware capabilities\n - Consider ease of use vs. flexibility\n\n6. **Installation recommendations**\n - Provide commands for suggested packages\n - Recommend conda environments for isolation\n - Suggest Docker containers for complex setups\n\n## Output\n\nProvide a report showing:\n- Currently installed AI/ML packages by category\n- Hardware capability summary\n- Recommended packages to install based on:\n - User's hardware\n - Current gaps in capabilities\n - Popular/useful tools\n- Installation commands for each suggestion\n- Notes on hardware requirements", "filepath": "ai-tools/audit-local-ai-packages.md" }, { "name": "gpu-ai-ml-assessment", "category": "ai-tools", "description": "", "tags": [], "content": "You are assessing GPU driver status and AI/ML workload capabilities.\n\n## Your Task\n\nEvaluate the GPU's driver configuration and suitability for AI/ML workloads, including deep learning frameworks, compute capabilities, and performance optimization.\n\n### 1. Driver Status Assessment\n- **Installed driver**: Type (proprietary/open-source) and version\n- **Driver source**: Distribution package, vendor installer, or compiled\n- **Driver status**: Loaded, functioning, errors\n- **Kernel module**: Module name and status\n- **Driver age**: Release date and recency\n- **Latest driver**: Compare installed vs. available\n- **Driver compatibility**: Kernel version compatibility\n- **Secure boot status**: Impact on driver loading\n\n### 2. Compute Framework Support\n- **CUDA availability**: CUDA Toolkit installation status\n- **CUDA version**: Installed CUDA version\n- **CUDA compatibility**: GPU compute capability vs. 
CUDA requirements\n- **ROCm availability**: For AMD GPUs\n- **ROCm version**: Installed ROCm version\n- **OpenCL support**: OpenCL runtime and version\n- **oneAPI**: Intel oneAPI toolkit status\n- **Framework libraries**: cuDNN, cuBLAS, TensorRT, etc.\n\n### 3. GPU Compute Capabilities\n- **Compute capability**: NVIDIA CUDA compute version (e.g., 8.6, 8.9)\n- **Architecture suitability**: Architecture generation for AI/ML\n- **Tensor cores**: Presence and version (Gen 1/2/3/4)\n- **RT cores**: Ray tracing acceleration (less relevant for ML)\n- **Memory bandwidth**: Critical for ML workloads\n- **VRAM capacity**: Memory size for model loading\n- **FP64/FP32/FP16/INT8**: Precision support\n- **TF32**: Tensor Float 32 support (Ampere+)\n- **Mixed precision**: Automatic mixed precision capability\n\n### 4. Deep Learning Framework Compatibility\n- **PyTorch**: Installation status and CUDA/ROCm support\n- **TensorFlow**: Installation and GPU backend\n- **JAX**: Google JAX framework support\n- **ONNX Runtime**: ONNX with GPU acceleration\n- **MXNet**: Apache MXNet support\n- **Hugging Face**: Transformers library GPU support\n- **Framework versions**: Installed versions and compatibility\n\n### 5. AI/ML Library Ecosystem\n- **cuDNN**: NVIDIA Deep Neural Network library\n- **cuBLAS**: CUDA Basic Linear Algebra Subprograms\n- **TensorRT**: High-performance deep learning inference\n- **NCCL**: NVIDIA Collective Communications Library (multi-GPU)\n- **MIOpen**: AMD GPU-accelerated primitives\n- **rocBLAS**: AMD GPU BLAS library\n- **oneDNN**: Intel Deep Neural Network library\n\n### 6. Performance Characteristics\n- **Memory bandwidth**: GB/s for data transfer\n- **Compute throughput**: TFLOPS for different precisions\n - FP64 (double precision)\n - FP32 (single precision)\n - FP16 (half precision)\n - INT8 (integer quantization)\n - TF32 (Tensor Float 32)\n- **Tensor core performance**: Dedicated AI acceleration\n- **Sparse tensor support**: Structured sparsity acceleration\n\n### 7. Model Size Compatibility\n- **VRAM capacity**: Total GPU memory\n- **Practical model sizes**: Estimated model capacity\n - Small models: < 1B parameters\n - Medium models: 1B-7B parameters\n - Large models: 7B-70B parameters\n - Very large models: > 70B parameters\n- **Batch size implications**: VRAM for different batch sizes\n- **Multi-GPU potential**: Scaling across GPUs\n\n### 8. Container and Virtualization Support\n- **Docker NVIDIA runtime**: nvidia-docker/NVIDIA Container Toolkit\n- **Docker ROCm runtime**: ROCm Docker support\n- **Podman GPU support**: GPU passthrough capability\n- **Kubernetes GPU**: Device plugin support\n- **GPU passthrough**: VM GPU assignment capability\n- **vGPU support**: Virtual GPU for multi-tenancy\n\n### 9. Monitoring and Profiling Tools\n- **nvidia-smi**: Real-time monitoring (NVIDIA)\n- **rocm-smi**: ROCm system management (AMD)\n- **Nsight Systems**: NVIDIA profiling suite\n- **Nsight Compute**: CUDA kernel profiler\n- **nvtop/radeontop**: Terminal GPU monitoring\n- **PyTorch profiler**: Framework-level profiling\n- **TensorBoard**: Training visualization\n\n### 10. Optimization Features\n- **Automatic mixed precision**: AMP support\n- **Gradient checkpointing**: Memory optimization\n- **Flash Attention**: Optimized attention mechanisms\n- **Quantization support**: INT8, INT4 inference\n- **Model compilation**: TorchScript, XLA, TensorRT\n- **Distributed training**: Multi-GPU training support\n- **CUDA graphs**: Kernel launch optimization\n\n### 11. 
Workload Suitability Assessment\n- **Training capability**: Suitable for training workloads\n- **Inference capability**: Suitable for inference\n- **Model type suitability**:\n - Computer vision (CNNs)\n - Natural language processing (Transformers)\n - Generative AI (Diffusion models, LLMs)\n - Reinforcement learning\n- **Performance tier**: Consumer, Professional, Data Center\n\n### 12. Bottleneck and Limitation Analysis\n- **Memory bottlenecks**: VRAM limitations for large models\n- **Compute bottlenecks**: GPU power for training speed\n- **PCIe bandwidth**: Data transfer limitations\n- **Driver limitations**: Missing features or bugs\n- **Power throttling**: Thermal or power constraints\n- **Multi-GPU scaling**: Efficiency of multi-GPU setup\n\n## Commands to Use\n\n**GPU and driver detection:**\n- `nvidia-smi` (NVIDIA)\n- `rocm-smi` (AMD)\n- `lspci | grep -i vga`\n- `lspci -v | grep -A 20 VGA`\n\n**NVIDIA driver details:**\n- `nvidia-smi -q`\n- `cat /proc/driver/nvidia/version`\n- `modinfo nvidia`\n- `nvidia-smi --query-gpu=driver_version --format=csv,noheader`\n\n**AMD driver details:**\n- `modinfo amdgpu`\n- `rocminfo`\n- `/opt/rocm/bin/rocm-smi --showdriverversion`\n\n**CUDA/ROCm installation:**\n- `nvcc --version` (CUDA compiler)\n- `which nvcc`\n- `ls /usr/local/cuda*/`\n- `echo $CUDA_HOME`\n- `hipcc --version` (ROCm)\n- `ls /opt/rocm/`\n\n**Compute capability:**\n- `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`\n- `nvidia-smi -q | grep \"Compute Capability\"`\n\n**Libraries check:**\n- `ldconfig -p | grep cudnn`\n- `ldconfig -p | grep cublas`\n- `ldconfig -p | grep tensorrt`\n- `ldconfig -p | grep nccl`\n- `ls /usr/lib/x86_64-linux-gnu/ | grep -i cuda`\n\n**Python framework check:**\n- `python3 -c \"import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}, Version: {torch.version.cuda}')\"`\n- `python3 -c \"import tensorflow as tf; print(f'TensorFlow: {tf.__version__}, GPU: {tf.config.list_physical_devices(\\\"GPU\\\")}')\"`\n- `python3 -c \"import torch; print(f'Tensor Cores: {torch.cuda.get_device_capability()}')\"`\n\n**Container runtime:**\n- `docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi`\n- `which nvidia-container-cli`\n- `nvidia-container-cli info`\n\n**OpenCL:**\n- `clinfo`\n- `clinfo | grep \"Device Name\"`\n\n**System libraries:**\n- `dpkg -l | grep -i cuda`\n- `dpkg -l | grep -i nvidia`\n- `dpkg -l | grep -i rocm`\n\n**Performance info:**\n- `nvidia-smi --query-gpu=name,memory.total,memory.free,driver_version,compute_cap --format=csv`\n- `nvidia-smi dmon -s pucvmet` (dynamic monitoring)\n\n## Output Format\n\n### Executive Summary\n```\nGPU: [model]\nDriver: [proprietary/open] v[version] ([status])\nCompute: [CUDA/ROCm] v[version] (Compute [capability])\nAI/ML Readiness: [Ready/Partial/Not Ready]\nBest For: [Training/Inference/Both]\nRecommended Frameworks: [PyTorch, TensorFlow, etc.]\n```\n\n### Detailed AI/ML Assessment\n\n**Driver Status:**\n- Type: [Proprietary/Open Source]\n- Version: [version number]\n- Release Date: [date]\n- Status: [Loaded/Error]\n- Kernel Module: [module] ([loaded/not loaded])\n- Latest Available: [version]\n- Update Recommended: [Yes/No]\n- Secure Boot: [Compatible/Issue]\n\n**Compute Framework Availability:**\n- CUDA Toolkit: [Installed/Not Installed] - v[version]\n- CUDA Driver API: v[version]\n- ROCm: [Installed/Not Installed] - v[version]\n- OpenCL: [Available/Not Available] - v[version]\n- Compute Capability: [X.X] ([architecture name])\n\n**GPU Compute 
Specifications:**\n- Architecture: [Turing/Ampere/Ada/RDNA3/Xe]\n- Tensor Cores: [Yes/No] - [Generation]\n- CUDA Cores / SPs: [count]\n- VRAM: [GB] [memory type]\n- Memory Bandwidth: [GB/s]\n- Precision Support:\n - FP64: [TFLOPS]\n - FP32: [TFLOPS]\n - FP16: [TFLOPS]\n - INT8: [TOPS]\n - TF32: [Yes/No]\n\n**AI/ML Libraries:**\n- cuDNN: [version] ([installed/missing])\n- cuBLAS: [version] ([installed/missing])\n- TensorRT: [version] ([installed/missing])\n- NCCL: [version] ([installed/missing])\n- MIOpen: [version] (AMD only)\n- rocBLAS: [version] (AMD only)\n\n**Deep Learning Framework Support:**\n- PyTorch: [version]\n - CUDA Enabled: [Yes/No]\n - CUDA Version: [version]\n - cuDNN Version: [version]\n- TensorFlow: [version]\n - GPU Support: [Yes/No]\n - CUDA Version: [version]\n- JAX: [installed/not installed]\n- ONNX Runtime: [GPU backend available]\n\n**Container Support:**\n- NVIDIA Container Toolkit: [installed/not installed]\n- Docker GPU Access: [working/not working]\n- Podman GPU Support: [available]\n\n**Model Capacity Estimates:**\n- Small Models (< 1B params): [batch size X]\n- Medium Models (1B-7B params): [batch size X]\n- Large Models (7B-13B params): [batch size X]\n- Very Large Models (13B-70B params): [requires multi-GPU or not possible]\n\nExample workload estimates based on [GB] VRAM:\n- LLaMA 7B: [inference only/training possible]\n- Stable Diffusion: [batch size X]\n- BERT Base: [batch size X]\n- GPT-2: [batch size X]\n\n**Workload Suitability:**\n- Training:\n - Small models: [Excellent/Good/Fair/Poor]\n - Medium models: [rating]\n - Large models: [rating]\n- Inference:\n - Real-time: [Excellent/Good/Fair/Poor]\n - Batch: [rating]\n - Low-latency: [rating]\n\n**Use Case Recommendations:**\n- Computer Vision (CNNs): [Excellent/Good/Fair/Poor]\n- NLP (Transformers): [rating]\n- Generative AI (LLMs): [rating]\n- Diffusion Models: [rating]\n- Reinforcement Learning: [rating]\n\n**Performance Tier:**\n- Category: [Consumer/Professional/Data Center]\n- Training Performance: [rating]\n- Inference Performance: [rating]\n- Multi-GPU Scaling: [available/not available]\n\n**Optimization Features Available:**\n- Automatic Mixed Precision: [Yes/No]\n- Tensor Core Utilization: [Yes/No]\n- TensorRT Optimization: [Available]\n- Flash Attention: [Supported]\n- INT8 Quantization: [Supported]\n- Multi-GPU Training: [Possible with [count] GPUs]\n\n**Limitations and Bottlenecks:**\n- VRAM Constraint: [assessment]\n- Memory Bandwidth: [adequate/limited]\n- Compute Throughput: [assessment]\n- PCIe Bottleneck: [yes/no]\n- Driver Limitations: [any known issues]\n- Power/Thermal: [throttling concerns]\n\n**Recommendations:**\n1. [Driver update/optimization suggestions]\n2. [Framework installation recommendations]\n3. [Workload optimization suggestions]\n4. [Hardware upgrade path if applicable]\n5. 
[Container/virtualization setup if beneficial]\n\n### AI/ML Readiness Scorecard\n\n```\nDriver Setup: [\u2713/\u2717/\u26a0] [details]\nCUDA/ROCm Install: [\u2713/\u2717/\u26a0] [details]\nFramework Support: [\u2713/\u2717/\u26a0] [details]\nLibrary Ecosystem: [\u2713/\u2717/\u26a0] [details]\nContainer Runtime: [\u2713/\u2717/\u26a0] [details]\nVRAM Capacity: [\u2713/\u2717/\u26a0] [details]\nCompute Performance: [\u2713/\u2717/\u26a0] [details]\n\nOverall Readiness: [Ready/Needs Setup/Limited/Not Suitable]\n```\n\n### AI-Readable JSON\n\n```json\n{\n \"driver\": {\n \"type\": \"proprietary|open_source\",\n \"version\": \"\",\n \"status\": \"loaded|error\",\n \"latest_available\": \"\",\n \"update_recommended\": false\n },\n \"compute_platform\": {\n \"cuda\": {\n \"installed\": false,\n \"version\": \"\",\n \"compute_capability\": \"\"\n },\n \"rocm\": {\n \"installed\": false,\n \"version\": \"\"\n },\n \"opencl\": {\n \"available\": false,\n \"version\": \"\"\n }\n },\n \"gpu_specs\": {\n \"architecture\": \"\",\n \"tensor_cores\": false,\n \"vram_gb\": 0,\n \"memory_bandwidth_gbs\": 0,\n \"fp32_tflops\": 0,\n \"fp16_tflops\": 0,\n \"int8_tops\": 0,\n \"tf32_support\": false\n },\n \"libraries\": {\n \"cudnn\": \"\",\n \"cublas\": \"\",\n \"tensorrt\": \"\",\n \"nccl\": \"\"\n },\n \"frameworks\": {\n \"pytorch\": {\n \"installed\": false,\n \"version\": \"\",\n \"cuda_available\": false\n },\n \"tensorflow\": {\n \"installed\": false,\n \"version\": \"\",\n \"gpu_available\": false\n }\n },\n \"container_support\": {\n \"nvidia_container_toolkit\": false,\n \"docker_gpu_working\": false\n },\n \"workload_suitability\": {\n \"training\": {\n \"small_models\": \"excellent|good|fair|poor\",\n \"medium_models\": \"\",\n \"large_models\": \"\"\n },\n \"inference\": {\n \"real_time\": \"\",\n \"batch\": \"\"\n }\n },\n \"model_capacity\": {\n \"vram_gb\": 0,\n \"small_model_batch_size\": 0,\n \"llama_7b_possible\": false,\n \"stable_diffusion_batch\": 0\n },\n \"optimization_features\": {\n \"amp_support\": false,\n \"tensor_core_utilization\": false,\n \"tensorrt_available\": false,\n \"int8_quantization\": false\n },\n \"bottlenecks\": {\n \"vram_limited\": false,\n \"compute_limited\": false,\n \"pcie_bottleneck\": false\n },\n \"ai_ml_readiness\": \"ready|needs_setup|limited|not_suitable\"\n}\n```\n\n## Execution Guidelines\n\n1. **Identify GPU vendor first**: NVIDIA, AMD, or Intel\n2. **Check driver installation**: Verify driver is loaded and working\n3. **Assess compute platform**: CUDA for NVIDIA, ROCm for AMD\n4. **Query compute capability**: Critical for framework compatibility\n5. **Check library installation**: cuDNN, TensorRT, etc.\n6. **Test framework access**: Try importing PyTorch/TensorFlow with GPU\n7. **Evaluate VRAM capacity**: Estimate model sizes\n8. **Check container support**: Important for ML workflows\n9. **Identify bottlenecks**: VRAM, compute, or driver issues\n10. 
**Provide actionable recommendations**: Setup steps or optimizations\n\n## Important Notes\n\n- NVIDIA GPUs have the most mature AI/ML ecosystem\n- CUDA compute capability determines supported features\n- cuDNN is critical for deep learning performance\n- VRAM is often the primary bottleneck for large models\n- Container runtimes simplify framework management\n- AMD ROCm support is improving but less mature than CUDA\n- Intel GPUs are emerging in AI/ML space\n- Tensor cores provide significant speedup for mixed precision\n- Driver version must match CUDA toolkit requirements\n- Some features require specific GPU generations\n- Multi-GPU setups require additional configuration\n- Consumer GPUs can be effective for smaller workloads\n- Professional/datacenter GPUs offer better reliability and support\n\nBe thorough and practical - provide a clear assessment of AI/ML readiness and actionable next steps.\n", "filepath": "ai-tools/gpu-ai-ml-assessment.md" }, { "name": "setup-speech-to-text", "category": "ai-tools", "description": "Check installed STT apps and suggest installations including local Whisper", "tags": [ "ai", "stt", "whisper", "speech-recognition", "audio", "project", "gitignored" ], "content": "You are helping the user set up speech-to-text applications including local Whisper.\n\n## Process\n\n1. **Check currently installed STT apps**\n - System packages: `dpkg -l | grep -E \"whisper|speech|voice\"`\n - Python packages: `pip list | grep -E \"whisper|speech|vosk\"`\n - Check `~/programs/ai-ml/` for installed apps\n\n2. **Suggest STT installation candidates**\n\n **Whisper (OpenAI) - Recommended:**\n - Best quality, local inference\n - Multiple model sizes available\n - Multilingual support\n\n **Other options:**\n - Vosk - Lightweight, offline\n - Coqui STT - Mozilla's solution\n - SpeechNote - Simple GUI\n - Subtitle Edit - Video subtitling\n - Subtld - Automatic subtitles\n\n3. **Install Whisper (local)**\n\n **Method 1: Using pip (simple)**\n ```bash\n pip install openai-whisper\n ```\n\n **Method 2: Using conda (recommended)**\n ```bash\n conda create -n whisper python=3.11 -y\n conda activate whisper\n pip install openai-whisper\n ```\n\n **Install dependencies:**\n ```bash\n # For audio processing\n sudo apt install ffmpeg\n pip install setuptools-rust\n ```\n\n4. **Install faster-whisper (optimized)**\n ```bash\n pip install faster-whisper\n ```\n - Uses CTranslate2 for faster inference\n - Lower VRAM usage\n\n5. **Install WhisperX (advanced)**\n ```bash\n pip install whisperx\n ```\n - Includes alignment and diarization\n - Better timestamps\n\n6. **Download Whisper models**\n - Models are downloaded automatically on first use\n - Sizes: tiny, base, small, medium, large\n - Suggest based on VRAM:\n - < 4GB: tiny or base\n - 4-8GB: small or medium\n - 8GB+: large\n\n7. **Test installation**\n ```bash\n whisper audio.mp3 --model base --language en\n ```\n\n8. **Install GUI options**\n\n **Whisper Desktop:**\n - Check if available as AppImage or Flatpak\n\n **Subtitle Edit:**\n ```bash\n sudo apt install subtitleeditor\n ```\n\n **Custom GUI:**\n - Suggest installing gradio-based Whisper UIs\n\n9. **Create helper script**\n - Offer to create `~/scripts/transcribe.sh`:\n ```bash\n #!/bin/bash\n whisper \"$1\" --model medium --language en --output_format txt\n ```\n\n10. 
**Suggest workflows**\n - Real-time transcription\n - Batch processing\n - Video subtitling\n - Meeting transcription\n\n## Output\n\nProvide a summary showing:\n- Currently installed STT applications\n- Whisper installation status and model sizes\n- GPU acceleration status\n- Suggested models based on hardware\n- Example commands for transcription\n- GUI options available\n- Helper scripts created", "filepath": "ai-tools/setup-speech-to-text.md" } ], "backup": [ { "name": "identify-backup-targets", "category": "backup", "description": "Identify filesystem parts to backup and suggest inclusion patterns", "tags": [ "backup", "filesystem", "strategy", "rclone", "project", "gitignored" ], "content": "You are helping the user identify which parts of their filesystem should be backed up and create appropriate inclusion patterns.\n\n## Process\n\n1. **Analyze filesystem structure**\n - Home directory size: `du -sh ~/*` or `du -h --max-depth=1 ~ | sort -h`\n - System directories to consider\n - External drives/mounts\n\n2. **Categorize data by importance**\n\n **Critical (must backup):**\n - Documents: `~/Documents`\n - Development work: `~/repos`\n - Configuration files: `~/.config`, `~/.ssh`, `~/.gnupg`\n - Scripts: `~/scripts`, `~/.local/bin`\n - AI documentation: `~/ai-docs`\n\n **Important (should backup):**\n - Pictures/Photos\n - Videos (personal)\n - Music (if not streaming)\n - Downloads (selective)\n - Email (if local)\n\n **Optional (consider backing up):**\n - Application data: `~/.local/share`\n - Browser data (bookmarks, passwords)\n - Game saves\n\n **Exclude (don't backup):**\n - Caches: `~/.cache`\n - Temporary files: `/tmp`, `~/.tmp`\n - Virtual machines/disk images\n - Node modules: `node_modules/`\n - Python venvs: `venv/`, `.venv/`\n - Build artifacts: `target/`, `build/`, `dist/`\n - Large media files (if cloud-synced elsewhere)\n\n3. **Identify special considerations**\n - Check for large directories: `du -h --max-depth=2 ~ | sort -h | tail -20`\n - Look for media libraries\n - Identify development projects with dependencies\n - Find version-controlled repos (can skip .git if remote exists)\n\n4. **Create inclusion/exclusion patterns**\n\n **For rclone:**\n ```\n # Include patterns\n + /Documents/**\n + /repos/**\n + /.config/**\n + /.ssh/**\n + /.gnupg/**\n + /scripts/**\n + /.local/bin/**\n + /ai-docs/**\n + /Pictures/**\n\n # Exclude patterns\n - /.cache/**\n - /.local/share/Trash/**\n - /**/node_modules/**\n - /**/.venv/**\n - /**/venv/**\n - /**/__pycache__/**\n - /**/.git/**\n - /.thumbnails/**\n ```\n\n **For rsync:**\n ```bash\n --include='/Documents/***'\n --include='/repos/***'\n --exclude='**/.cache/'\n --exclude='**/node_modules/'\n --exclude='**/.venv/'\n ```\n\n5. **Calculate backup size**\n - Estimate total backup size based on included directories\n - Consider compression potential\n - Plan for growth\n\n6. **Suggest backup frequency**\n - Critical data: Daily or real-time sync\n - Important data: Weekly\n - Optional data: Monthly\n - System configs: After changes\n\n7. 
**Create backup configuration file**\n - Offer to create `~/scripts/backup-config.txt` with patterns\n - Create `~/scripts/backup-estimate.sh` to calculate size\n\n## Output\n\nProvide a report showing:\n- Categorized list of directories to backup\n- Size estimates for each category\n- Recommended inclusion/exclusion patterns (rclone and rsync format)\n- Total estimated backup size\n- Suggested backup frequency for each category\n- Configuration file content", "filepath": "backup/identify-backup-targets.md" } ], "bluetooth": [ { "name": "reset-bluetooth", "category": "bluetooth", "description": "", "tags": [], "content": "# Reset Bluetooth\n\nYou are helping the user completely reset the Bluetooth subsystem to fix persistent issues.\n\n## Task\n\n**WARNING:** This will remove all paired Bluetooth devices and require re-pairing.\n\n1. **Ask user to confirm:**\n - This will unpair all Bluetooth devices\n - Devices will need to be paired again\n - Bluetooth service will be restarted\n\n2. **Stop Bluetooth service:**\n ```bash\n # Stop Bluetooth\n sudo systemctl stop bluetooth\n\n # Verify stopped\n systemctl is-active bluetooth\n ```\n\n3. **Kill any remaining Bluetooth processes:**\n ```bash\n # Kill bluetoothd\n sudo killall bluetoothd 2>/dev/null\n\n # Kill bluetooth-related processes\n ps aux | grep bluetooth | grep -v grep\n sudo killall -9 bluez-alsa bluez-obexd 2>/dev/null\n ```\n\n4. **Remove Bluetooth pairing cache:**\n ```bash\n # Remove paired devices database\n sudo rm -rf /var/lib/bluetooth/*\n\n # Show what was removed\n echo \"Removed all paired device data from /var/lib/bluetooth/\"\n ```\n\n5. **Clear user Bluetooth cache:**\n ```bash\n # Remove user Bluetooth cache\n rm -rf ~/.cache/bluetooth 2>/dev/null\n rm -rf ~/.local/share/bluetooth 2>/dev/null\n\n echo \"Cleared user Bluetooth cache\"\n ```\n\n6. **Reset Bluetooth modules:**\n ```bash\n # Remove Bluetooth kernel modules\n sudo modprobe -r bnep\n sudo modprobe -r bluetooth\n sudo modprobe -r btusb\n sudo modprobe -r btintel # Intel Bluetooth\n sudo modprobe -r btrtl # Realtek Bluetooth\n\n echo \"Bluetooth modules unloaded\"\n sleep 2\n ```\n\n7. **Reload Bluetooth modules:**\n ```bash\n # Reload modules\n sudo modprobe bluetooth\n sudo modprobe btusb\n sudo modprobe bnep\n\n # Load vendor-specific modules if needed\n sudo modprobe btintel 2>/dev/null\n sudo modprobe btrtl 2>/dev/null\n\n echo \"Bluetooth modules reloaded\"\n ```\n\n8. **Reset HCI interface:**\n ```bash\n # Bring down Bluetooth controller\n sudo hciconfig hci0 down 2>/dev/null\n\n sleep 1\n\n # Bring it back up\n sudo hciconfig hci0 up 2>/dev/null\n\n # Reset the controller\n sudo hciconfig hci0 reset 2>/dev/null\n\n echo \"HCI interface reset\"\n ```\n\n9. **Unblock Bluetooth:**\n ```bash\n # Unblock Bluetooth (soft and hard)\n sudo rfkill unblock bluetooth\n\n # Verify not blocked\n rfkill list bluetooth\n ```\n\n10. **Start Bluetooth service:**\n ```bash\n # Start and enable Bluetooth\n sudo systemctl start bluetooth\n sudo systemctl enable bluetooth\n\n # Wait for service to fully start\n sleep 3\n\n # Check status\n systemctl status bluetooth --no-pager\n ```\n\n11. **Power on Bluetooth controller:**\n ```bash\n # Turn on Bluetooth\n bluetoothctl power on\n\n # Set as discoverable (optional)\n bluetoothctl discoverable on\n\n # Set pairable\n bluetoothctl pairable on\n\n # Show controller info\n bluetoothctl show\n ```\n\n12. 
**Verify Bluetooth is working:**\n ```bash\n # Check service\n echo \"Service status: $(systemctl is-active bluetooth)\"\n\n # Check controller\n echo \"Controller powered: $(bluetoothctl show | grep Powered)\"\n\n # Check for adapters\n hciconfig -a\n\n # Start scanning to test\n echo \"Starting scan for 10 seconds...\"\n timeout 10 bluetoothctl scan on\n\n bluetoothctl devices\n ```\n\n13. **Create reset report:**\n ```bash\n cat > /tmp/bluetooth-reset-report.txt << EOF\n Bluetooth Reset Report\n ======================\n Date: $(date)\n\n === Service Status ===\n $(systemctl status bluetooth --no-pager)\n\n === Controller Info ===\n $(bluetoothctl show)\n\n === Hardware Info ===\n $(hciconfig -a)\n\n === RF Kill Status ===\n $(rfkill list bluetooth)\n\n === Loaded Modules ===\n $(lsmod | grep -E \"bluetooth|bnep|btusb\")\n\n === Kernel Messages (last 20) ===\n $(dmesg | grep -i bluetooth | tail -20)\n\n Next Steps:\n 1. Your Bluetooth has been reset\n 2. All previous pairings have been removed\n 3. Put your device in pairing mode\n 4. Use: bluetoothctl scan on\n 5. Use: bluetoothctl pair <MAC_ADDRESS>\n 6. Use: bluetoothctl connect <MAC_ADDRESS>\n EOF\n\n cat /tmp/bluetooth-reset-report.txt\n ```\n\n## USB Bluetooth Adapter Reset\n\nIf using USB Bluetooth adapter:\n```bash\n# Find USB Bluetooth device\nusb_bt=$(lsusb | grep -i bluetooth | head -1)\necho \"Found: $usb_bt\"\n\n# Get bus and device numbers\nbus=$(echo $usb_bt | awk '{print $2}')\ndev=$(echo $usb_bt | awk '{print $4}' | tr -d ':')\n\n# Reset USB device\necho \"Resetting USB device: Bus $bus Device $dev\"\nsudo usb_modeswitch -v 0x$(lsusb | grep -i bluetooth | awk '{print $6}' | cut -d: -f1) \\\n -p 0x$(lsusb | grep -i bluetooth | awk '{print $6}' | cut -d: -f2) \\\n --reset-usb 2>/dev/null\n\n# Alternative: unbind and rebind (sysfs device names are bus-port, without leading zeros)\nfor device_path in /sys/bus/usb/devices/$((10#$bus))-*; do\n [ -f \"$device_path/devnum\" ] || continue\n [ \"$(cat \"$device_path/devnum\")\" -eq \"$((10#$dev))\" ] || continue\n dev_id=$(basename \"$device_path\")\n echo \"Unbinding and rebinding USB device $dev_id\"\n echo \"$dev_id\" | sudo tee /sys/bus/usb/drivers/usb/unbind 2>/dev/null\n sleep 2\n echo \"$dev_id\" | sudo tee /sys/bus/usb/drivers/usb/bind 2>/dev/null\ndone\n```\n\n## Firmware Reload\n\nIf firmware issues persist:\n```bash\n# Check firmware files\nls -l /lib/firmware/ | grep -i bluetooth\n\n# Reload firmware (device-specific)\n# For Intel Bluetooth:\nsudo rmmod btintel\nsudo modprobe btintel\n\n# For Realtek:\nsudo rmmod btrtl\nsudo modprobe btrtl\n\n# Check if firmware loaded\ndmesg | grep -i \"bluetooth.*firmware\" | tail -5\n```\n\n## Complete System Reset\n\nNuclear option if nothing else works:\n```bash\n# Stop everything\nsudo systemctl stop bluetooth\nsudo killall -9 bluetoothd\n\n# Remove all data\nsudo rm -rf /var/lib/bluetooth/*\nrm -rf ~/.cache/bluetooth\nrm -rf ~/.local/share/bluetooth\n\n# Remove and reload all modules\nsudo modprobe -r bnep bluetooth btusb btintel btrtl\nsleep 3\nsudo modprobe bluetooth btusb bnep\n\n# Remove config (will regenerate)\nsudo mv /etc/bluetooth/main.conf /etc/bluetooth/main.conf.backup\n\n# Reboot system\necho \"A system reboot is recommended for complete reset\"\n# sudo reboot\n```\n\n## Post-Reset Pairing\n\nGuide user through pairing:\n```bash\ncat << 'EOF'\nTo pair a device after reset:\n\n1. Put device in pairing mode\n2. Start Bluetooth scan:\n bluetoothctl scan on\n\n3. Find your device MAC address in the list\n\n4. Pair the device:\n bluetoothctl pair XX:XX:XX:XX:XX:XX\n\n5. Trust the device:\n bluetoothctl trust XX:XX:XX:XX:XX:XX\n\n6. Connect:\n bluetoothctl connect XX:XX:XX:XX:XX:XX\n\n7. 
Stop scanning:\n bluetoothctl scan off\n\nFor audio devices, you may need to restart PipeWire:\n systemctl --user restart pipewire wireplumber\nEOF\n```\n\n## Troubleshooting\n\n**Reset didn't work:**\n```bash\n# Try full reboot\nsudo reboot\n\n# Or try removing Bluetooth packages and reinstalling\n# sudo apt remove bluez bluetooth\n# sudo apt install bluez bluetooth\n```\n\n**Service won't start:**\n```bash\n# Check for errors\njournalctl -u bluetooth --since \"5 minutes ago\" --no-pager\n\n# Check if masked\nsudo systemctl unmask bluetooth\n\n# Force restart\nsudo systemctl restart bluetooth\n```\n\n**No adapters found:**\n```bash\n# Check hardware detection\nlsusb | grep -i bluetooth\nlspci | grep -i bluetooth\n\n# Check kernel modules\nlsmod | grep bluetooth\n```\n\n## Notes\n\n- Backup `/var/lib/bluetooth/` before reset if you want to preserve pairings\n- Some devices may require specific PIN codes during pairing\n- Audio devices may need additional PipeWire/PulseAudio configuration\n- Bluetooth 5.0+ devices are backward compatible\n- LE (Low Energy) devices may require special pairing procedures\n- System reboot recommended after complete reset\n- Check `/var/log/syslog` for detailed Bluetooth errors\n", "filepath": "bluetooth/reset-bluetooth.md" }, { "name": "troubleshoot-bluetooth", "category": "bluetooth", "description": "", "tags": [], "content": "# Troubleshoot Bluetooth\n\nYou are helping the user diagnose and fix Bluetooth connectivity issues.\n\n## Task\n\n1. **Check Bluetooth service status:**\n ```bash\n # Service status\n systemctl status bluetooth\n\n # Is it running?\n systemctl is-active bluetooth\n\n # Is it enabled to start on boot?\n systemctl is-enabled bluetooth\n ```\n\n2. **Check Bluetooth hardware:**\n ```bash\n # List Bluetooth adapters\n hciconfig -a\n\n # Or using rfkill\n rfkill list bluetooth\n\n # Check if Bluetooth is soft/hard blocked\n rfkill list\n\n # Detailed hardware info\n lsusb | grep -i bluetooth\n lspci | grep -i bluetooth\n ```\n\n3. **Check Bluetooth controller status:**\n ```bash\n # Controller info\n bluetoothctl show\n\n # List controllers\n bluetoothctl list\n\n # Check if powered on\n bluetoothctl show | grep \"Powered:\"\n ```\n\n4. **Check for firmware issues:**\n ```bash\n # Look for firmware errors in journal\n journalctl -u bluetooth | grep -i firmware\n\n # Check dmesg for Bluetooth firmware loading\n dmesg | grep -i \"bluetooth\\|firmware\" | tail -20\n\n # Check kernel modules\n lsmod | grep bluetooth\n ```\n\n5. **Common fixes - Restart Bluetooth:**\n ```bash\n # Restart Bluetooth service\n sudo systemctl restart bluetooth\n\n # Reset Bluetooth controller\n sudo hciconfig hci0 down\n sudo hciconfig hci0 up\n\n # Or use bluetoothctl\n bluetoothctl power off\n sleep 2\n bluetoothctl power on\n ```\n\n6. **Unblock Bluetooth if blocked:**\n ```bash\n # Check if blocked\n rfkill list bluetooth\n\n # Unblock if soft-blocked\n sudo rfkill unblock bluetooth\n\n # Power on controller\n bluetoothctl power on\n ```\n\n7. **Scan for devices:**\n ```bash\n # Start scanning\n bluetoothctl scan on\n\n # Wait 10 seconds\n sleep 10\n\n # List discovered devices\n bluetoothctl devices\n\n # Stop scanning\n bluetoothctl scan off\n ```\n\n8. **Check paired devices:**\n ```bash\n # List paired devices\n bluetoothctl paired-devices\n\n # Show info about specific device\n bluetoothctl info DEVICE_MAC_ADDRESS\n ```\n\n9. 
**Remove and re-pair problematic device:**\n ```bash\n # Ask user for device MAC address\n # Remove device\n bluetoothctl remove DEVICE_MAC_ADDRESS\n\n # Scan again\n bluetoothctl scan on\n sleep 10\n\n # Pair device\n bluetoothctl pair DEVICE_MAC_ADDRESS\n\n # Trust device\n bluetoothctl trust DEVICE_MAC_ADDRESS\n\n # Connect to device\n bluetoothctl connect DEVICE_MAC_ADDRESS\n ```\n\n10. **Check Bluetooth configuration:**\n ```bash\n # Main config file\n cat /etc/bluetooth/main.conf\n\n # Check for problematic settings\n grep -E \"^(Name|Class|DiscoverableTimeout|AutoEnable)\" /etc/bluetooth/main.conf\n ```\n\n11. **Check for interference:**\n ```bash\n # Check WiFi and Bluetooth coexistence\n iw dev | grep -i channel\n\n # Bluetooth operates on 2.4GHz\n # WiFi on 2.4GHz can interfere\n echo \"Consider switching WiFi to 5GHz if available\"\n ```\n\n12. **Detailed diagnostics:**\n ```bash\n # Bluetooth subsystem logs\n journalctl -u bluetooth --since \"1 hour ago\" --no-pager\n\n # Kernel Bluetooth messages\n dmesg | grep -i bluetooth | tail -30\n\n # Check for errors\n journalctl -u bluetooth | grep -iE \"error|fail|timeout\"\n ```\n\n13. **Audio-specific Bluetooth issues:**\n ```bash\n # Check PulseAudio/PipeWire Bluetooth modules\n pactl list | grep -i bluetooth\n\n # PipeWire Bluetooth\n pw-cli ls Device | grep -i bluetooth\n\n # Load Bluetooth module (PulseAudio)\n # pactl load-module module-bluetooth-discover\n\n # Restart audio with Bluetooth\n systemctl --user restart pipewire pipewire-pulse wireplumber\n ```\n\n14. **Generate diagnostic report:**\n ```bash\n cat > /tmp/bluetooth-diagnostic.txt << EOF\n Bluetooth Diagnostic Report\n ===========================\n Date: $(date)\n\n === Service Status ===\n $(systemctl status bluetooth --no-pager)\n\n === Hardware Detection ===\n $(rfkill list bluetooth)\n $(hciconfig -a)\n\n === Controller Info ===\n $(bluetoothctl show)\n\n === Paired Devices ===\n $(bluetoothctl paired-devices)\n\n === Recent Bluetooth Logs ===\n $(journalctl -u bluetooth --since \"1 hour ago\" --no-pager | tail -50)\n\n === Kernel Messages ===\n $(dmesg | grep -i bluetooth | tail -30)\n\n === Loaded Modules ===\n $(lsmod | grep bluetooth)\n\n === Configuration ===\n $(grep -v \"^#\\|^$\" /etc/bluetooth/main.conf)\n EOF\n\n cat /tmp/bluetooth-diagnostic.txt\n ```\n\n## Common Issues & Solutions\n\n**Bluetooth not starting:**\n```bash\n# Check if service masked\nsystemctl unmask bluetooth\nsudo systemctl enable --now bluetooth\n```\n\n**Device pairs but won't connect:**\n```bash\n# Remove and re-pair\nbluetoothctl remove DEVICE_MAC\nbluetoothctl scan on\n# Wait for device to appear\nbluetoothctl pair DEVICE_MAC\nbluetoothctl trust DEVICE_MAC\nbluetoothctl connect DEVICE_MAC\n```\n\n**Audio stuttering or quality issues:**\n```bash\n# Check Bluetooth audio codec\npactl list | grep -i \"bluetooth\\|codec\"\n\n# Try different codec\n# Edit /etc/bluetooth/main.conf\n# [General]\n# Enable=Source,Sink,Media,Socket\n```\n\n**Bluetooth adapter not found:**\n```bash\n# Reload Bluetooth module\nsudo rmmod btusb\nsudo modprobe btusb\n\n# Or all Bluetooth modules\nsudo rmmod bnep btusb bluetooth\nsudo modprobe bluetooth\nsudo modprobe btusb\nsudo modprobe bnep\n```\n\n**Random disconnections:**\n```bash\n# Increase page timeout\nsudo hciconfig hci0 pageto 8192\n\n# Disable power management for USB Bluetooth\n# Add to /etc/udev/rules.d/50-bluetooth.rules:\n# ACTION==\"add\", SUBSYSTEM==\"usb\", ATTR{idVendor}==\"YOUR_VENDOR\", ATTR{power/autosuspend}=\"-1\"\n```\n\n**LE (Low 
Energy) devices not working:**\n```bash\n# Enable LE in bluetoothctl\nbluetoothctl\n[bluetooth]# menu scan\n[bluetooth]# transport le\n[bluetooth]# scan on\n```\n\n## Reset Bluetooth Completely\n\nIf nothing else works:\n```bash\n# Stop Bluetooth\nsudo systemctl stop bluetooth\n\n# Remove paired devices cache\nsudo rm -rf /var/lib/bluetooth/*\n\n# Restart Bluetooth\nsudo systemctl start bluetooth\n\n# Re-pair all devices\n```\n\n## Notes\n\n- Most Bluetooth adapters work with btusb kernel module\n- Firmware files typically in `/lib/firmware/`\n- Check if kernel has Bluetooth support: `zgrep BLUETOOTH /proc/config.gz`\n- Some devices require specific pairing codes\n- Audio devices may need PulseAudio/PipeWire Bluetooth modules\n- USB Bluetooth adapters: check with `usb-devices | grep -A 10 Bluetooth`\n- Bluetooth 5.x adapters are backward compatible with older devices\n- Distance and obstacles affect connection quality\n", "filepath": "bluetooth/troubleshoot-bluetooth.md" } ], "configuration": [ { "name": "add-bash-alias", "category": "configuration", "description": "", "tags": [], "content": "I would like to add a bash alias. I will provide the alias or ask for your suggestions as to an appropriate alias.\n\nTo come up with an appropriate alias - identify one that is unlikely to conflict with other aliases.\n\nEither way:\n\nCreate the new bash alias(es) in ~/.bash_aliases\n\nThen you can use sourcebash to reload the bash alias file.\n", "filepath": "configuration/add-bash-alias.md" }, { "name": "check-git-config", "category": "configuration", "description": "Check user's basic git config and make any desired edits", "tags": [ "git", "configuration", "settings", "development", "project", "gitignored" ], "content": "You are helping the user review and configure their git settings.\n\n## Process\n\n1. **Display current git configuration**\n - Global config: `git config --global --list`\n - Local config (if in repo): `git config --local --list`\n - Show config file location: `git config --global --list --show-origin`\n\n2. **Check essential settings**\n\n **User identity:**\n ```bash\n git config --global user.name\n git config --global user.email\n ```\n - Verify these are set correctly\n - If not set, ask user for values\n\n **Default editor:**\n ```bash\n git config --global core.editor\n ```\n - Suggest: `nano`, `vim`, `code --wait`, etc.\n\n **Default branch name:**\n ```bash\n git config --global init.defaultBranch\n ```\n - Recommend: `main` or `master`\n\n3. **Suggest useful configurations**\n\n **Color output:**\n ```bash\n git config --global color.ui auto\n ```\n\n **Credential helper:**\n ```bash\n git config --global credential.helper store\n # or for cache: git config --global credential.helper 'cache --timeout=3600'\n ```\n\n **Push behavior:**\n ```bash\n git config --global push.default simple\n git config --global push.autoSetupRemote true\n ```\n\n **Pull behavior:**\n ```bash\n git config --global pull.rebase false # merge (default)\n # or: git config --global pull.rebase true # rebase\n # or: git config --global pull.ff only # fast-forward only\n ```\n\n **Line endings:**\n ```bash\n git config --global core.autocrlf input # Linux/Mac\n ```\n\n **Diff and merge tools:**\n ```bash\n git config --global diff.tool meld\n git config --global merge.tool meld\n ```\n\n4. 
**Aliases (optional but useful)**\n Ask if user wants common aliases:\n ```bash\n git config --global alias.st status\n git config --global alias.co checkout\n git config --global alias.br branch\n git config --global alias.ci commit\n git config --global alias.unstage 'reset HEAD --'\n git config --global alias.last 'log -1 HEAD'\n git config --global alias.lg \"log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit\"\n ```\n\n5. **GPG signing (optional)**\n ```bash\n git config --global commit.gpgsign true\n git config --global user.signingkey \n ```\n\n6. **Show updated configuration**\n - Display all global settings\n - Highlight changes made\n\n## Output\n\nProvide a summary showing:\n- Current git configuration\n- Missing essential settings\n- Recommended configurations\n- Changes made (if any)\n- Next steps or additional suggestions", "filepath": "configuration/check-git-config.md" }, { "name": "check-global-gitignore", "category": "configuration", "description": "Check if user has global gitignore and create one if not", "tags": [ "git", "configuration", "gitignore", "development", "project", "gitignored" ], "content": "You are helping the user set up a global gitignore file.\n\n## Process\n\n1. **Check if global gitignore exists**\n - Run: `git config --global core.excludesfile`\n - Check common locations:\n - `~/.gitignore_global`\n - `~/.gitignore`\n - `~/.config/git/ignore`\n\n2. **If global gitignore doesn't exist, create one**\n - Choose location: `~/.gitignore_global`\n - Configure git to use it:\n ```bash\n git config --global core.excludesfile ~/.gitignore_global\n ```\n\n3. **Populate with common patterns**\n - Create comprehensive gitignore with patterns for:\n\n **Operating System:**\n ```\n # macOS\n .DS_Store\n .AppleDouble\n .LSOverride\n\n # Linux\n *~\n .directory\n .Trash-*\n\n # Windows\n Thumbs.db\n Desktop.ini\n ```\n\n **IDEs and Editors:**\n ```\n # VS Code\n .vscode/\n *.code-workspace\n\n # JetBrains\n .idea/\n *.iml\n\n # Vim\n *.swp\n *.swo\n *~\n\n # Emacs\n *~\n \\#*\\#\n ```\n\n **Languages and Frameworks:**\n ```\n # Python\n __pycache__/\n *.py[cod]\n *$py.class\n .venv/\n venv/\n ENV/\n .Python\n *.egg-info/\n dist/\n build/\n\n # Node.js\n node_modules/\n npm-debug.log\n yarn-error.log\n .npm/\n\n # Ruby\n *.gem\n .bundle/\n vendor/bundle/\n\n # Rust\n target/\n Cargo.lock\n\n # Go\n *.exe\n *.test\n *.out\n ```\n\n **Build artifacts:**\n ```\n *.o\n *.a\n *.so\n *.dylib\n *.dll\n *.class\n *.jar\n ```\n\n **Misc:**\n ```\n # Logs\n *.log\n logs/\n\n # Temporary files\n *.tmp\n *.temp\n .cache/\n\n # Environment files\n .env\n .env.local\n\n # Database files\n *.sqlite\n *.db\n ```\n\n4. **Review existing gitignore if it exists**\n - Read current file\n - Suggest additions if patterns are missing\n - Offer to back up before modifying\n\n5. **Test the configuration**\n - Verify config: `git config --global core.excludesfile`\n - Show the file: `cat ~/.gitignore_global`\n\n## Output\n\nProvide a summary showing:\n- Global gitignore location\n- Whether it was created or already existed\n- List of patterns included\n- Verification of git configuration", "filepath": "configuration/check-global-gitignore.md" }, { "name": "check-path", "category": "configuration", "description": "", "tags": [], "content": "# Check and Analyze PATH\n\nYou are helping the user analyze what's on their PATH and suggest additions or improvements.\n\n## Your tasks:\n\n1. 
**Display current PATH:**\n ```bash\n echo $PATH | tr ':' '\\n'\n ```\n\n2. **Check which paths actually exist:**\n ```bash\n echo $PATH | tr ':' '\\n' | while read p; do\n if [ -d \"$p\" ]; then\n echo \"\u2713 $p\"\n else\n echo \"\u2717 $p (does not exist)\"\n fi\n done\n ```\n\n3. **Check for duplicate PATH entries:**\n ```bash\n echo $PATH | tr ':' '\\n' | sort | uniq -d\n ```\n\n4. **Identify where PATH is being set:**\n Check common locations:\n ```bash\n grep -n \"PATH\" ~/.bashrc ~/.bash_profile ~/.profile /etc/environment /etc/profile 2>/dev/null\n ```\n\n5. **Check for common development tool paths:**\n\n **Programming languages:**\n - Python user packages: `~/.local/bin`\n - Rust cargo: `~/.cargo/bin`\n - Go: `~/go/bin` or `$GOPATH/bin`\n - Ruby gems: Check with `gem environment`\n - Node/npm: Check with `npm config get prefix`\n\n **Package managers:**\n - Homebrew: `/home/linuxbrew/.linuxbrew/bin`\n - SDKMAN: `~/.sdkman/candidates/*/current/bin`\n - pipx: `~/.local/bin`\n\n **Version managers:**\n - pyenv: `~/.pyenv/bin`\n - rbenv: `~/.rbenv/bin`\n - nvm: (check ~/.nvm/)\n - asdf: `~/.asdf/bin`\n\n **System tools:**\n - User binaries: `~/bin`, `~/.local/bin`\n - Snap: `/snap/bin`\n - Flatpak: `/var/lib/flatpak/exports/bin`\n\n6. **Check what's installed in each PATH directory:**\n For each directory in PATH:\n ```bash\n echo \"Contents of $dir:\"\n ls -la \"$dir\" | head -10\n ```\n\n7. **Suggest missing common paths:**\n Check and suggest if not in PATH:\n\n - `~/.local/bin` (Python user packages, pipx)\n - `~/bin` (User scripts)\n - `~/.cargo/bin` (Rust packages)\n - `~/go/bin` (Go packages)\n - `/snap/bin` (Snap packages)\n - `~/.npm-global/bin` (npm global packages)\n\n For each missing path that has executables, suggest adding it.\n\n8. **Check for security issues:**\n - Warn if `.` (current directory) is in PATH\n - Warn if world-writable directories are in PATH:\n ```bash\n echo $PATH | tr ':' '\\n' | while read p; do\n if [ -d \"$p\" ] && [ -n \"$(find \"$p\" -maxdepth 0 -perm -0002)\" ]; then\n ls -ld \"$p\"\n fi\n done\n ```\n\n9. **Check PATH order/precedence:**\n Explain that earlier paths take precedence.\n Show which binary would be executed:\n ```bash\n which -a python python3 java gcc git node npm\n ```\n\n10. **Check for conflicting tools:**\n ```bash\n type -a python\n type -a python3\n type -a java\n ```\n\n11. **Suggest PATH organization:**\n Recommended order:\n 1. User binaries (`~/bin`, `~/.local/bin`)\n 2. Version managers (pyenv, rbenv, nvm)\n 3. Language-specific paths (cargo, go)\n 4. Homebrew\n 5. System binaries (`/usr/local/bin`, `/usr/bin`, `/bin`)\n\n12. **Check environment-specific paths:**\n\n **Python:**\n ```bash\n python3 -m site --user-base\n # Suggests adding $(python3 -m site --user-base)/bin\n ```\n\n **Node/npm:**\n ```bash\n npm config get prefix\n # Suggests adding $(npm config get prefix)/bin\n ```\n\n **Go:**\n ```bash\n go env GOPATH\n # Suggests adding $GOPATH/bin\n ```\n\n **Rust:**\n ```bash\n echo $CARGO_HOME\n # Suggests adding ~/.cargo/bin\n ```\n\n13. 
**Generate suggested PATH setup:**\n Based on findings, create suggested additions for ~/.bashrc:\n\n ```bash\n # User binaries\n export PATH=\"$HOME/bin:$PATH\"\n export PATH=\"$HOME/.local/bin:$PATH\"\n\n # Python\n export PATH=\"$HOME/.local/bin:$PATH\"\n\n # Rust\n export PATH=\"$HOME/.cargo/bin:$PATH\"\n\n # Go\n export PATH=\"$HOME/go/bin:$PATH\"\n\n # SDKMAN\n # Added by sdkman-init.sh\n\n # pyenv\n export PYENV_ROOT=\"$HOME/.pyenv\"\n export PATH=\"$PYENV_ROOT/bin:$PATH\"\n eval \"$(pyenv init --path)\"\n\n # Homebrew\n eval \"$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)\"\n ```\n\n14. **Check for broken symlinks in PATH:**\n ```bash\n echo $PATH | tr ':' '\\n' | while read dir; do\n if [ -d \"$dir\" ]; then\n find \"$dir\" -maxdepth 1 -type l ! -exec test -e {} \\; -print 2>/dev/null\n fi\n done\n ```\n\n15. **Provide recommendations:**\n - Remove non-existent directories from PATH\n - Add missing common paths that have executables\n - Fix duplicate entries\n - Correct PATH order if needed\n - Remove security issues (`.` in PATH, world-writable dirs)\n - Consolidate PATH modifications into one file (prefer ~/.bashrc)\n - Document what each PATH addition is for\n\n16. **Show how to temporarily modify PATH:**\n ```bash\n # Add to front (takes precedence)\n export PATH=\"/new/path:$PATH\"\n\n # Add to end\n export PATH=\"$PATH:/new/path\"\n\n # Remove from PATH\n export PATH=$(echo $PATH | tr ':' '\\n' | grep -v \"/path/to/remove\" | tr '\\n' ':')\n ```\n\n17. **Show how to make PATH changes permanent:**\n ```bash\n echo 'export PATH=\"$HOME/bin:$PATH\"' >> ~/.bashrc\n source ~/.bashrc\n ```\n\n## Important notes:\n- Changes to PATH only affect current shell unless made permanent\n- Order matters - earlier paths have precedence\n- Don't add current directory (`.`) to PATH\n- Use absolute paths when possible\n- Source ~/.bashrc after changes: `source ~/.bashrc`\n- Some tools (pyenv, conda, nvm) modify PATH dynamically\n- Check for PATH modifications in multiple files\n", "filepath": "configuration/check-path.md" }, { "name": "debug-folder-permissions", "category": "configuration", "description": "", "tags": [], "content": "# Debug System Folder Permissions\n\nYou are helping the user debug systemwide folder permissions and ensure they are set appropriately.\n\n## Your tasks:\n\n1. **Gather information from user:**\n Ask:\n - Are they experiencing specific permission errors?\n - Which directories or operations are affected?\n - What user/group should have access?\n\n2. **Check common system directories:**\n\n **Root filesystem:**\n ```bash\n ls -ld /\n # Should be: drwxr-xr-x root root\n ```\n\n **Essential system directories:**\n ```bash\n ls -ld /bin /sbin /usr /usr/bin /usr/sbin /lib /lib64\n # Should be: drwxr-xr-x root root\n ```\n\n **Variable data:**\n ```bash\n ls -ld /var /var/log /var/tmp\n # /var: drwxr-xr-x root root\n # /var/log: drwxrwxr-x root syslog (or root root)\n # /var/tmp: drwxrwxrwt root root (sticky bit)\n ```\n\n **Temporary directories:**\n ```bash\n ls -ld /tmp\n # Should be: drwxrwxrwt root root (sticky bit important!)\n ```\n\n **Home directories:**\n ```bash\n ls -ld /home /home/$USER\n # /home: drwxr-xr-x root root\n # /home/$USER: drwxr-xr-x $USER $USER (or drwx------ for privacy)\n ```\n\n3. **Check for permission issues:**\n\n **World-writable directories without sticky bit (security risk):**\n ```bash\n sudo find / -type d -perm -0002 ! 
-perm -1000 2>/dev/null\n ```\n\n **Files with SUID bit (potential security issue if unexpected):**\n ```bash\n sudo find / -type f -perm -4000 2>/dev/null\n ```\n\n **Files with SGID bit:**\n ```bash\n sudo find / -type f -perm -2000 2>/dev/null\n ```\n\n4. **Check /etc permissions:**\n ```bash\n ls -la /etc | head -20\n # /etc itself: drwxr-xr-x root root\n # Most files should be 644 (rw-r--r--)\n # Some may be 640 or 600 for security\n ```\n\n **Sensitive files:**\n ```bash\n ls -l /etc/shadow /etc/gshadow /etc/ssh/sshd_config\n # /etc/shadow: -rw-r----- root shadow\n # /etc/ssh/sshd_config: -rw-r--r-- root root\n ```\n\n5. **Check user home directory structure:**\n ```bash\n ls -la ~/ | grep \"^d\"\n ```\n\n Common directories and recommended permissions:\n - `~/.ssh`: 700 (drwx------)\n - `~/.ssh/id_rsa`: 600 (-rw-------)\n - `~/.ssh/id_rsa.pub`: 644 (-rw-r--r--)\n - `~/.ssh/authorized_keys`: 600 (-rw-------)\n - `~/.gnupg`: 700 (drwx------)\n - `~/bin`: 755 (drwxr-xr-x)\n - `~/.local`: 755 (drwxr-xr-x)\n - `~/.config`: 755 (drwxr-xr-x)\n\n6. **Check /opt and /usr/local:**\n ```bash\n ls -ld /opt /usr/local /usr/local/bin\n # Typically: drwxr-xr-x root root\n # But may be group-writable for admin group\n ```\n\n7. **Check mount points:**\n ```bash\n mount | grep \"^/\" | awk '{print $3}' | while read mp; do\n ls -ld \"$mp\"\n done\n ```\n\n8. **Check ownership of user files:**\n Find files in home directory not owned by user:\n ```bash\n find ~/ -not -user $USER 2>/dev/null\n ```\n\n9. **Check group memberships:**\n ```bash\n groups\n id\n ```\n\n Common groups users might need:\n - `sudo` - for administrative access\n - `docker` - for Docker access\n - `video` - for video devices\n - `audio` - for audio devices\n - `plugdev` - for removable devices\n - `dialout` - for serial ports\n\n10. **Fix common issues:**\n\n **Fix sticky bit on /tmp:**\n ```bash\n sudo chmod 1777 /tmp\n ```\n\n **Fix ~/.ssh permissions:**\n ```bash\n chmod 700 ~/.ssh\n chmod 600 ~/.ssh/id_rsa\n chmod 644 ~/.ssh/id_rsa.pub\n chmod 600 ~/.ssh/authorized_keys\n chmod 600 ~/.ssh/config\n ```\n\n **Fix ownership of home directory:**\n ```bash\n sudo chown -R $USER:$USER ~/\n ```\n\n **Fix common directories:**\n ```bash\n chmod 755 ~/.local ~/.config ~/bin\n ```\n\n11. **Check for ACL (Access Control Lists):**\n ```bash\n getfacl /path/to/directory\n ```\n\n If ACLs are in use (indicated by `+` in ls -l):\n ```bash\n ls -la | grep \"+\"\n ```\n\n12. **Check SELinux context (if enabled):**\n ```bash\n getenforce\n ls -Z /path/to/directory\n ```\n\n13. **Check for immutable flags:**\n ```bash\n lsattr /path/to/file\n ```\n\n If files have `i` flag, they can't be modified even by root:\n ```bash\n sudo chattr -i /path/to/file\n ```\n\n14. **Specific directory recommendations:**\n\n **/var/www (web server):**\n ```bash\n sudo chown -R www-data:www-data /var/www\n sudo find /var/www -type d -exec chmod 755 {} \\;\n sudo find /var/www -type f -exec chmod 644 {} \\;\n ```\n\n **/srv (service data):**\n ```bash\n sudo chown -R root:root /srv\n sudo chmod 755 /srv\n ```\n\n **Shared directories:**\n ```bash\n sudo chown root:groupname /shared/directory\n sudo chmod 2775 /shared/directory # SGID bit for group\n ```\n\n15. **Check logs for permission denials:**\n ```bash\n sudo journalctl -p err | grep -i \"permission denied\"\n dmesg | grep -i \"permission denied\"\n sudo grep \"permission denied\" /var/log/syslog\n ```\n\n16. 
**Report findings:**\n Summarize:\n - Incorrect permissions on system directories\n - Security issues (world-writable without sticky, unexpected SUID)\n - User home directory issues\n - Files/directories with wrong ownership\n - Missing group memberships\n - ACL or SELinux issues\n\n17. **Provide recommendations:**\n - Fix commands for identified issues\n - Whether to add user to specific groups\n - Security improvements for sensitive directories\n - Standard permission schemes for common directories\n - Whether to use ACLs for complex permission needs\n\n## Important notes:\n- Always backup or test in safe environment first\n- Changing system permissions incorrectly can break the system\n- Use sudo carefully when fixing permissions\n- Don't recursively chmod/chown system directories without understanding\n- Some non-standard permissions may be intentional\n- Check application documentation for required permissions\n- SELinux/AppArmor may also affect access beyond traditional permissions\n- Sticky bit on /tmp is critical for security\n- SUID/SGID bits on unexpected files are security risks\n", "filepath": "configuration/debug-folder-permissions.md" }, { "name": "list-ssh-connections", "category": "configuration", "description": "Review which SSH connection names/hosts the user has configured", "tags": [ "ssh", "configuration", "hosts", "network", "project", "gitignored" ], "content": "You are helping the user review their SSH connection configurations.\n\n## Process\n\n1. **Check if SSH config exists**\n - Look for: `~/.ssh/config`\n - If not found, offer to create one\n\n2. **Parse SSH config file**\n - Read `~/.ssh/config`\n - Extract Host entries\n - For each host, show:\n - Host alias\n - HostName (IP/domain)\n - User\n - Port\n - IdentityFile (SSH key)\n - Other options\n\n3. **Display in organized format**\n - Present as table or list:\n ```\n Alias: server1\n HostName: 192.168.1.100\n User: admin\n Port: 22\n Key: ~/.ssh/id_rsa\n ---\n ```\n\n4. **Check system-wide SSH config**\n - Also check `/etc/ssh/ssh_config` for global settings\n - Note any system-wide host configurations\n\n5. **Test connectivity (optional)**\n - Ask if user wants to test connections\n - For each host:\n ```bash\n ssh -T user@host\n # or\n ssh -o ConnectTimeout=5 user@host \"echo Connection successful\"\n ```\n\n6. **Identify stale connections**\n - Look for connections to:\n - IPs that might have changed\n - Servers that may no longer exist\n - Old project servers\n\n7. **Suggest config improvements**\n - Recommend useful SSH config options:\n ```\n Host *\n ServerAliveInterval 60\n ServerAliveCountMax 3\n TCPKeepAlive yes\n ControlMaster auto\n ControlPath ~/.ssh/sockets/%r@%h-%p\n ControlPersist 600\n ```\n\n8. **Offer to create new entries**\n - If user wants to add new SSH hosts\n - Template:\n ```\n Host shortname\n HostName hostname.com\n User username\n Port 22\n IdentityFile ~/.ssh/id_ed25519\n ForwardAgent yes\n ```\n\n9. 
**Security check**\n - Verify file permissions: `chmod 600 ~/.ssh/config`\n - Look for insecure settings (password auth, etc.)\n\n## Output\n\nProvide a summary showing:\n- List of configured SSH hosts\n- Connection details for each\n- Stale/inactive connections (if identified)\n- Connectivity test results (if performed)\n- Suggested improvements\n- New entries added (if any)", "filepath": "configuration/list-ssh-connections.md" }, { "name": "manage-api-keys", "category": "configuration", "description": "Review API keys on PATH and add new ones if requested", "tags": [ "api", "keys", "environment", "configuration", "project", "gitignored" ], "content": "You are helping the user manage their API keys and environment variables.\n\n## Process\n\n1. **Check for API keys in environment**\n - List environment variables: `env | grep -E \"API|KEY|TOKEN\"`\n - Check common locations:\n - `~/.bashrc`\n - `~/.zshrc`\n - `~/.profile`\n - `~/.env`\n - Project-specific `.env` files\n\n2. **Display current API keys (safely)**\n - Show key names and partial values (mask full keys)\n - Example: `OPENAI_API_KEY=sk-*********************`\n\n3. **Common API keys to check for**\n - OpenAI API\n - Anthropic API (Claude)\n - OpenRouter API\n - Hugging Face token\n - GitHub token\n - Google Cloud API\n - AWS credentials\n - Azure credentials\n - Database connection strings\n\n4. **Add new API keys**\n - Ask user which API keys they want to add\n - For each key:\n - Key name (e.g., `OPENAI_API_KEY`)\n - Key value (handle securely)\n - Scope (global, project-specific, etc.)\n\n5. **Choose storage location**\n\n **Option 1: Shell config (global)**\n - Add to `~/.bashrc` or `~/.zshrc`:\n ```bash\n export OPENAI_API_KEY=\"sk-...\"\n export ANTHROPIC_API_KEY=\"sk-...\"\n ```\n - Reload: `source ~/.bashrc`\n\n **Option 2: .env file (project-specific)**\n - Create/update `.env` file\n - Add to `.gitignore`\n - Use with dotenv library\n\n **Option 3: Secret manager**\n - Suggest using `pass`, `gnome-keyring`, or similar\n - More secure for sensitive keys\n\n6. **Set appropriate permissions**\n - For files containing keys:\n ```bash\n chmod 600 ~/.env\n chmod 600 ~/.bashrc\n ```\n\n7. **Test API keys**\n - Offer to test each key (if user wants)\n - Example for OpenAI:\n ```bash\n curl https://api.openai.com/v1/models \\\n -H \"Authorization: Bearer $OPENAI_API_KEY\" \\\n | jq .\n ```\n\n8. **Security recommendations**\n - REFRAIN from providing unsolicited security advice\n - Only mention if asked:\n - Don't commit keys to git\n - Use `.gitignore` for `.env` files\n - Rotate keys periodically\n - Use environment-specific keys (dev, prod)\n\n9. **Create helper script (optional)**\n - Offer to create script to load environment:\n ```bash\n #!/bin/bash\n # load-env.sh\n if [ -f .env ]; then\n export $(cat .env | xargs)\n fi\n ```\n\n## Output\n\nProvide a summary showing:\n- Currently configured API keys (names only, values masked)\n- New API keys added\n- Storage location for each key\n- Test results (if performed)\n- Next steps for using the keys", "filepath": "configuration/manage-api-keys.md" }, { "name": "manage-ssh-keys", "category": "configuration", "description": "Review installed SSH key pairs and delete old ones if desired", "tags": [ "ssh", "security", "keys", "configuration", "project", "gitignored" ], "content": "You are helping the user manage their SSH keys.\n\n## Process\n\n1. 
**List SSH keys**\n - List keys in `~/.ssh/`: `ls -la ~/.ssh/`\n - Identify key pairs:\n - Private keys (no extension, or `.pem`)\n - Public keys (`.pub`)\n - Known hosts file\n - Config file\n\n2. **Display public keys with details**\n - For each public key:\n ```bash\n for key in ~/.ssh/*.pub; do\n echo \"=== $key ===\"\n ssh-keygen -l -f \"$key\"\n echo \"\"\n done\n ```\n - Shows: key length, fingerprint, comment\n\n3. **Check if keys are loaded in ssh-agent**\n - List loaded keys: `ssh-add -l`\n - If agent not running: `eval \"$(ssh-agent -s)\"`\n\n4. **Identify key usage**\n - Check `~/.ssh/config` for key assignments\n - Ask user about each key:\n - Where is it used? (GitHub, servers, etc.)\n - Is it still needed?\n - When was it created?\n\n5. **Check key security**\n - Verify key types (RSA, ED25519, etc.)\n - Check key lengths:\n - RSA: Minimum 2048-bit, prefer 4096-bit\n - ED25519: 256-bit (modern, recommended)\n - Suggest upgrading old/weak keys\n\n6. **Delete old/unused keys**\n - For each key user wants to remove:\n ```bash\n rm ~/.ssh/old_key\n rm ~/.ssh/old_key.pub\n ```\n - Update `~/.ssh/config` if key was referenced\n - Remove from ssh-agent: `ssh-add -d ~/.ssh/old_key`\n\n7. **Generate new keys if needed**\n - Suggest ED25519 for new keys:\n ```bash\n ssh-keygen -t ed25519 -C \"user@email.com\"\n ```\n - Or RSA 4096:\n ```bash\n ssh-keygen -t rsa -b 4096 -C \"user@email.com\"\n ```\n\n8. **Update permissions**\n - Ensure correct permissions:\n ```bash\n chmod 700 ~/.ssh\n chmod 600 ~/.ssh/id_*\n chmod 644 ~/.ssh/id_*.pub\n chmod 600 ~/.ssh/config\n ```\n\n9. **Add keys to ssh-agent**\n - Add keys: `ssh-add ~/.ssh/id_ed25519`\n - Persist across reboots (add to `~/.bashrc`):\n ```bash\n eval \"$(ssh-agent -s)\"\n ssh-add ~/.ssh/id_ed25519\n ```\n\n## Output\n\nProvide a summary showing:\n- List of SSH keys with details (type, length, fingerprint)\n- Keys currently loaded in ssh-agent\n- Keys deleted (if any)\n- New keys generated (if any)\n- Security recommendations\n- Next steps for adding keys to services", "filepath": "configuration/manage-ssh-keys.md" }, { "name": "validate-bashrc", "category": "configuration", "description": "", "tags": [], "content": "# Bashrc Validation\n\nYou are helping the user validate their .bashrc configuration for syntax errors, issues, and best practices.\n\n## Your tasks:\n\n1. **Locate bashrc files:**\n - Check `~/.bashrc`\n - Check `~/.bash_profile`\n - Check `~/.profile`\n - Check `/etc/bash.bashrc` (system-wide)\n - Note which files exist and their sizes\n\n2. **Syntax validation:**\n - Test bashrc syntax: `bash -n ~/.bashrc`\n - If errors are found, report the line numbers and error messages\n - Check for common syntax issues:\n - Unclosed quotes\n - Unmatched brackets\n - Missing 'fi', 'done', 'esac' keywords\n\n3. **Source validation:**\n - Test if bashrc can be sourced without errors in a subshell:\n ```bash\n bash -c 'source ~/.bashrc && echo \"Sourcing successful\"'\n ```\n - Capture any error messages\n\n4. **Check for common issues:**\n - Duplicate PATH entries:\n ```bash\n bash -c 'source ~/.bashrc; echo $PATH | tr \":\" \"\\n\" | sort | uniq -d'\n ```\n - Check for sourcing non-existent files:\n ```bash\n grep -n \"source\\|^\\.\" ~/.bashrc | while read line; do\n # Extract and check if files exist\n done\n ```\n - Look for potentially problematic patterns:\n - Infinite loops\n - Commands that might hang (network calls without timeouts)\n - Unguarded recursive sourcing\n\n5. 
**Check initialization order:**\n - Explain which files are loaded and in what order for:\n - Login shells\n - Non-login interactive shells\n - Non-interactive shells\n - Check if the proper guards are in place (e.g., checking for interactive shell)\n\n6. **Performance analysis:**\n - Time how long bashrc takes to load:\n ```bash\n time bash -c 'source ~/.bashrc; exit'\n ```\n - If it takes more than 0.5 seconds, identify potential slow sections:\n - Look for commands that might be slow (network calls, heavy computations)\n - Check for unnecessary repeated operations\n\n7. **Check for security issues:**\n - World-writable bashrc: `ls -la ~/.bashrc`\n - Suspicious commands (downloads, eval with user input, etc.)\n - Sourcing files from world-writable directories\n\n8. **Validate environment manager initialization:**\n - Check if environment managers are properly initialized:\n - pyenv: `grep \"pyenv init\" ~/.bashrc`\n - conda: `grep \"conda initialize\" ~/.bashrc`\n - nvm: `grep \"nvm.sh\" ~/.bashrc`\n - rbenv: `grep \"rbenv init\" ~/.bashrc`\n - sdkman: `grep \"sdkman-init.sh\" ~/.bashrc`\n - Verify they're in the correct order (PATH modifications should come after system PATH is set)\n\n9. **Check for best practices:**\n - Interactive shell guard at the top:\n ```bash\n [[ $- != *i* ]] && return\n ```\n - Proper PATH modification (appending/prepending, not replacing)\n - Using `command -v` instead of `which`\n - Proper quoting of variables\n\n10. **Report findings:**\n - Summary of validation results (PASS/FAIL)\n - List of any errors or warnings\n - Performance metrics\n - Recommendations:\n - Fixes for any syntax errors\n - Optimization suggestions if slow\n - Security improvements if needed\n - Best practice improvements\n - If bashrc is missing, offer to create a basic one\n\n## Important notes:\n- Don't modify the bashrc unless explicitly asked\n- Be careful when testing - use subshells to avoid affecting the current environment\n- Distinguish between critical errors and style suggestions\n- Consider that some \"issues\" might be intentional for the user's workflow\n", "filepath": "configuration/validate-bashrc.md" } ], "debugging": [ { "name": "check-boot-logs", "category": "debugging", "description": "", "tags": [], "content": "You are analyzing system boot logs to identify failures and issues.\n\n## Your Task\n\n1. **Analyze systemd boot logs** using `journalctl -b` to examine the most recent boot\n2. **Identify failures** by searching for:\n - Failed services (`systemctl --failed`)\n - Error and warning messages in boot logs\n - Services that timed out during boot\n - Failed units and dependency issues\n3. **Categorize issues** by severity:\n - Critical: Services that failed and are essential for system operation\n - Warning: Services that failed but are non-essential\n - Info: Services that are deprecated or can be safely disabled\n4. **Provide detailed analysis** including:\n - Service name and what it does\n - Exact error message from logs\n - Potential causes of the failure\n - Suggested remediation steps\n5. 
**Suggest cleanup actions** for:\n - Deprecated services that can be disabled\n - Unnecessary services slowing down boot\n - Configuration fixes for failed services\n\n## Commands to Use\n\n- `journalctl -b -p err` - Show errors from current boot\n- `journalctl -b -p warning` - Show warnings from current boot\n- `systemctl --failed` - List failed units\n- `systemctl list-units --state=failed` - Detailed failed units\n- `journalctl -u ` - Check specific service logs\n- `systemd-analyze critical-chain` - Show boot time-critical chain\n\n## Output Format\n\nPresent findings in a clear, organized manner:\n1. Summary of boot health\n2. Critical failures requiring immediate attention\n3. Warnings and non-critical issues\n4. Recommendations for cleanup and optimization\n5. Specific commands to fix identified issues\n\nBe thorough but concise. Focus on actionable insights.\n", "filepath": "debugging/check-boot-logs.md" }, { "name": "diagnose-crash", "category": "debugging", "description": "", "tags": [], "content": "# Diagnose Program Crash\n\nYou are helping the user diagnose a recent program crash. Ask for additional context from the user, but start by checking obvious places in the logs.\n\n## Your tasks:\n\n1. **Gather information from the user:**\n Ask the user:\n - What program crashed?\n - When did it crash (approximate time)?\n - What were they doing when it crashed?\n - Does it crash consistently or intermittently?\n - Any error messages displayed?\n\n2. **Check system journal for crash information:**\n - Recent errors: `sudo journalctl -p err -b`\n - Last 100 log entries: `sudo journalctl -n 100 --no-pager`\n - If user knows approximate time:\n ```bash\n sudo journalctl --since \"10 minutes ago\" -p warning\n sudo journalctl --since \"YYYY-MM-DD HH:MM:SS\"\n ```\n - Search for specific program name:\n ```bash\n sudo journalctl -b | grep -i \"\"\n ```\n\n3. **Check kernel logs (dmesg):**\n - Recent kernel messages: `dmesg | tail -100`\n - Look for segmentation faults, OOM kills, kernel panics\n - Search for program name: `dmesg | grep -i \"\"`\n - Check for OOM (Out of Memory) kills:\n ```bash\n dmesg | grep -i \"killed process\"\n sudo journalctl -k | grep -i \"out of memory\"\n ```\n\n4. **Check for core dumps:**\n - Check if core dumps are enabled: `ulimit -c`\n - System core dump pattern: `cat /proc/sys/kernel/core_pattern`\n - Look for core dumps:\n ```bash\n find /var/lib/systemd/coredump -name \"**\" -mtime -1\n find . -name \"core*\" -mtime -1\n ```\n - If systemd-coredump is used:\n ```bash\n coredumpctl list\n coredumpctl info \n coredumpctl dump -o /tmp/core.dump\n ```\n\n5. **Check application-specific logs:**\n - User logs: `~/.local/share/applications/`\n - Xsession errors: `cat ~/.xsession-errors`\n - Application cache: `~/.cache//`\n - Application config: `~/.config//`\n - System logs: `/var/log/`\n - Application-specific locations:\n - Web browsers: `~/.mozilla/`, `~/.config/google-chrome/`\n - Snaps: `snap logs `\n - Flatpaks: `flatpak logs`\n\n6. **Check crash reporter systems:**\n - Ubuntu Apport crashes:\n ```bash\n ls -lt /var/crash/\n ubuntu-bug # to file bug report\n ```\n - GNOME crash reports (if applicable): `~/.local/share/gnome-shell/`\n - KDE crash reports: `~/.local/share/drkonqi/`\n\n7. **Check resource issues at crash time:**\n - Memory usage: `free -h`\n - Disk space: `df -h`\n - Check if system was swapping heavily before crash\n - Recent high memory usage: `sudo journalctl | grep -i \"out of memory\"`\n\n8. 
**Check for dependency issues:**\n - Missing libraries:\n ```bash\n ldd $(which )\n ```\n - Check if program still exists and is executable:\n ```bash\n which \n ls -la $(which )\n ```\n - Version information: ` --version`\n\n9. **Check for recent system changes:**\n - Recent package updates: `grep \" install \\| upgrade \" /var/log/dpkg.log | tail -50`\n - Recent system updates: `grep \"upgrade\" /var/log/apt/history.log | tail -20`\n - Kernel changes: `ls -lt /boot/vmlinuz-*`\n\n10. **Graphics/display-related crashes:**\n If GUI application:\n - X server errors: `grep -i \"error\\|segfault\" /var/log/Xorg.0.log`\n - Wayland compositor logs: `journalctl -b | grep -i \"kwin\\|wayland\"`\n - GPU issues:\n ```bash\n nvidia-smi # for NVIDIA\n dmesg | grep -i \"gpu\\|nvidia\\|amdgpu\\|radeon\"\n ```\n\n11. **Check for known issues:**\n - Search package bug tracker: `ubuntu-bug --package `\n - Check if issue is reproducible\n - Check program's GitHub issues (if applicable)\n\n12. **Analyze crash signatures:**\n Look for common crash indicators:\n - **Segmentation fault (SIGSEGV)**: Memory access violation\n - **SIGABRT**: Program called abort()\n - **SIGILL**: Illegal instruction\n - **SIGBUS**: Bus error\n - **SIGFPE**: Floating point exception\n - **OOM Killer**: Process killed due to out of memory\n - **Stack trace**: If available in logs\n\n13. **Try to reproduce the crash:**\n If possible, guide user to:\n - Run program from terminal to see error output:\n ```bash\n 2>&1 | tee /tmp/program-output.log\n ```\n - Run with debug logging (if supported):\n ```bash\n --debug\n --verbose\n ```\n - Check environment variables that might affect behavior\n\n14. **Report findings:**\n Summarize:\n - Probable cause of crash (if identified)\n - Relevant log entries\n - Any error messages or stack traces found\n - Resource issues if any\n - Recent system changes that might be related\n\n15. **Provide recommendations:**\n Based on findings, suggest:\n - **If OOM kill**: Reduce memory usage, close other programs, add more RAM/swap\n - **If segfault**: Check for updates, try reinstalling program, report bug\n - **If dependency issue**: Reinstall program and dependencies\n - **If config issue**: Reset configuration, move config to backup\n - **If disk full**: Free up disk space\n - **If recent update**: Consider downgrading or wait for fix\n - **If reproducible**: Enable debug mode, create bug report with steps\n - **If GPU-related**: Update drivers, check GPU health\n - How to enable better crash reporting (core dumps, apport)\n - Consider running program under debugger (gdb) if user is technical\n\n## Important notes:\n- Use sudo for system logs and journal access\n- Times in logs are usually in UTC - account for timezone\n- Be sensitive that crashes can be frustrating for users\n- Some log files can be very large - use grep and tail to filter\n- Core dumps can be very large - ask before extracting\n- Privacy: don't ask to see sensitive information from logs\n- If no obvious cause is found, explain what additional info would help\n", "filepath": "debugging/diagnose-crash.md" }, { "name": "diagnose-slowdown", "category": "debugging", "description": "", "tags": [], "content": "# Diagnose System Slowdown\n\nYou are helping the user diagnose system laginess and performance issues.\n\n## Your tasks:\n\n1. **Gather initial information:**\n Ask the user:\n - When did the slowdown start?\n - Is it constant or intermittent?\n - What activities trigger it? 
(startup, specific applications, general use)\n - Any recent changes? (updates, new software, configuration changes)\n\n2. **Check current system load:**\n - System load averages: `uptime`\n - Detailed load info: `w`\n - Number of processes: `ps aux | wc -l`\n\n3. **CPU analysis:**\n - Real-time CPU usage: `top -b -n 1 | head -20`\n - Per-core CPU usage: `mpstat -P ALL 1 1` (if sysstat installed)\n - Top CPU consumers: `ps aux --sort=-%cpu | head -20`\n - CPU frequency and throttling:\n ```bash\n cat /proc/cpuinfo | grep MHz\n sudo cpupower frequency-info # if available\n ```\n - Check for thermal throttling:\n ```bash\n sensors # if lm-sensors installed\n cat /sys/class/thermal/thermal_zone*/temp\n ```\n\n4. **Memory analysis:**\n - Memory usage: `free -h`\n - Detailed memory info: `cat /proc/meminfo`\n - Swap usage: `swapon --show`\n - Top memory consumers: `ps aux --sort=-%mem | head -20`\n - Check for memory leaks or runaway processes\n - OOM (Out of Memory) events: `sudo journalctl -k | grep -i \"out of memory\"`\n\n5. **Disk I/O analysis:**\n - Disk usage: `df -h`\n - Inode usage: `df -i`\n - I/O statistics: `iostat -x 1 5` (if sysstat installed)\n - Top I/O processes: `sudo iotop -b -n 1 | head -20` (if iotop installed)\n - Check for high disk wait: `top` and look at `wa` (wait) percentage\n - Disk health: `sudo smartctl -H /dev/sda` (for each drive)\n\n6. **Process analysis:**\n - List all running processes: `ps aux`\n - Process tree: `pstree -p`\n - Zombie processes: `ps aux | grep Z`\n - Processes in D state (uninterruptible sleep): `ps aux | grep \" D \"`\n - Long-running processes: `ps -eo pid,user,start,time,cmd --sort=-time | head -20`\n\n7. **Check for system resource contention:**\n - Context switches: `vmstat 1 5`\n - Interrupts: `cat /proc/interrupts`\n - Check if system is swapping heavily: `vmstat 1 5` (look at si/so columns)\n\n8. **Network issues (can cause perceived slowness):**\n - Network connections: `ss -s`\n - Active connections: `netstat -tunap | wc -l` or `ss -tunap | wc -l`\n - DNS resolution test: `time nslookup google.com`\n - Check for network errors: `ip -s link`\n\n9. **Graphics/Desktop environment (for GUI slowness):**\n - Check X server or Wayland compositor CPU usage\n - GPU usage (if nvidia): `nvidia-smi` or `watch -n 1 nvidia-smi`\n - For AMD: `radeontop` (if installed)\n - Check compositor settings (KDE Plasma on Wayland)\n - Desktop effects CPU usage\n\n10. **Check system logs for errors:**\n - Recent errors: `sudo journalctl -p err -b`\n - Kernel messages: `dmesg | tail -50`\n - System log: `sudo journalctl -xe --no-pager | tail -100`\n - Look for specific issues:\n - Hardware errors\n - Driver issues\n - Service failures\n - Filesystem errors\n\n11. **Check for background services/processes:**\n - List all services: `systemctl list-units --type=service --state=running`\n - Failed services: `systemctl --failed`\n - Check for update managers, indexing services (updatedb, baloo, tracker)\n - Snap services: `snap list` and check for snap updates\n - Flatpak: `flatpak list`\n\n12. **Application-specific checks:**\n If slowness is application-specific:\n - Browser: check extensions, tabs, cache size\n - Database: check for long-running queries\n - IDE: check for indexing, plugins\n - Check application logs: `~/.local/share/applications/` or specific app log locations\n\n13. **Historical data (if available):**\n - Check sar data: `sar -u` (if sysstat/sar configured)\n - Check historical logs: `sudo journalctl --since \"1 day ago\" -p err`\n\n14. 
**Analyze and report findings:**\n Categorize issues found:\n - **CPU bottleneck**: High CPU usage, identify culprit processes\n - **Memory bottleneck**: High memory usage, swapping, suggest adding RAM or killing processes\n - **Disk I/O bottleneck**: High wait times, slow disk, suggest SSD upgrade or I/O optimization\n - **Thermal throttling**: High temperatures causing CPU slowdown\n - **Runaway processes**: Specific process consuming excessive resources\n - **Resource leaks**: Memory or handle leaks in specific applications\n - **Background tasks**: Indexing, updates, backups running\n - **Network issues**: DNS problems, slow network affecting system\n\n15. **Provide recommendations:**\n Based on findings, suggest:\n - Kill or restart specific problematic processes\n - Disable unnecessary services\n - Adjust swappiness: `sudo sysctl vm.swappiness=10`\n - Clean up disk space if low\n - Update or reinstall problematic drivers\n - Install missing performance tools (sysstat, iotop, htop)\n - Schedule resource-intensive tasks for off-hours\n - Hardware upgrades (RAM, SSD) if appropriate\n - Investigate and fix application-specific issues\n - Check for and apply system updates\n - Reboot if system has been up for extended period with memory leaks\n\n## Important notes:\n- Install missing diagnostic tools if needed (sysstat, iotop, htop, lm-sensors)\n- Use sudo for system-level diagnostics\n- Be systematic - check CPU, memory, disk, and network in order\n- Correlate findings with user's description of when slowness occurs\n- Don't immediately kill processes - confirm with user first\n- Consider both hardware and software causes\n", "filepath": "debugging/diagnose-slowdown.md" }, { "name": "failed-boot-services", "category": "debugging", "description": "", "tags": [], "content": "During the boot process, I noticed that several services failed\n\nPlease review the logs to identify which those were and remediate:\n\n- If they can fixed, fix them \n- If they refer to old processes, remove them \n- If you are not sure, ask ", "filepath": "debugging/failed-boot-services.md" }, { "name": "review-boot", "category": "debugging", "description": "", "tags": [], "content": "Review system boot process and identify issues.\n\nYour task:\n1. Scan boot messages and journal logs:\n ```bash\n journalctl -b # Current boot\n journalctl -b -1 # Previous boot\n ```\n\n2. Identify issues:\n - Failed services\n - Error messages\n - Warnings\n - Slow-starting services\n - Dependency problems\n\n3. Analyze boot performance:\n ```bash\n systemd-analyze # Overall boot time\n systemd-analyze blame # Time per service\n systemd-analyze critical-chain # Critical path\n ```\n\n4. Suggest remediation:\n - Fix failed services\n - Disable unnecessary services\n - Resolve dependency issues\n - Optimize slow services\n\n5. Provide actionable recommendations:\n - Commands to investigate specific issues\n - Configuration changes needed\n - Services to disable or reconfigure\n - Further diagnostic steps\n\nProactively identify and suggest fixes for boot-time issues.\n", "filepath": "debugging/review-boot.md" } ], "dev-tools": [ { "name": "optimize-vscode-installation", "category": "dev-tools", "description": "Evaluate VS Code installation and suggest optimizations like repo source changes", "tags": [ "vscode", "development", "optimization", "configuration", "project", "gitignored" ], "content": "You are helping the user optimize their VS Code installation.\n\n## Process\n\n1. 
**Check how VS Code is installed**\n ```bash\n which code\n dpkg -l | grep code\n snap list | grep code\n flatpak list | grep code\n ```\n - Identify installation method: apt, snap, flatpak, manual\n\n2. **Check VS Code version**\n ```bash\n code --version\n ```\n - Compare with latest version\n - Check if updates are available\n\n3. **Evaluate current installation method**\n\n **APT (official repo) - Recommended:**\n - Pros: Native integration, automatic updates, best performance\n - Cons: Requires adding Microsoft repo\n\n **Snap:**\n - Pros: Easy install, sandboxed\n - Cons: Slower startup, snap overhead, potential issues with extensions\n\n **Flatpak:**\n - Pros: Sandboxed, cross-distro\n - Cons: Some filesystem access limitations\n\n **Manual .deb:**\n - Pros: Control over updates\n - Cons: Manual update process\n\n4. **Suggest migration if needed**\n\n **If installed via Snap, suggest migrating to APT:**\n ```bash\n # Remove snap version\n sudo snap remove code\n\n # Add official Microsoft repo\n wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg\n sudo install -o root -g root -m 644 packages.microsoft.gpg /etc/apt/trusted.gpg.d/\n sudo sh -c 'echo \"deb [arch=amd64] https://packages.microsoft.com/repos/code stable main\" > /etc/apt/sources.list.d/vscode.list'\n\n # Install via apt\n sudo apt update && sudo apt install code\n ```\n\n **If privacy-conscious, suggest VSCodium:**\n ```bash\n flatpak install flathub com.vscodium.codium\n ```\n\n5. **Check VS Code configuration**\n - Review settings: `~/.config/Code/User/settings.json`\n - Check for optimization opportunities:\n - Telemetry settings\n - Auto-save\n - File watcher limits\n - Extension recommendations\n\n6. **Optimize performance settings**\n Suggest adding to settings.json:\n ```json\n {\n \"files.watcherExclude\": {\n \"**/.git/objects/**\": true,\n \"**/node_modules/**\": true,\n \"**/.venv/**\": true\n },\n \"files.exclude\": {\n \"**/__pycache__\": true,\n \"**/.pytest_cache\": true\n },\n \"search.exclude\": {\n \"**/node_modules\": true,\n \"**/venv\": true\n },\n \"telemetry.telemetryLevel\": \"off\"\n }\n ```\n\n7. **Check installed extensions**\n ```bash\n code --list-extensions\n ```\n - Identify potentially redundant extensions\n - Suggest disabling unused extensions for performance\n\n8. **Suggest useful extensions**\n - Based on detected project types\n - Common useful extensions:\n - GitLens\n - Prettier\n - ESLint/Pylint\n - Docker\n - Remote-SSH\n - Live Server (web dev)\n\n9. **Check for conflicts**\n - Multiple VS Code installations\n - Conflicting extensions\n - Settings sync issues\n\n10. **Create backup of settings**\n - Offer to backup:\n - `~/.config/Code/User/settings.json`\n - `~/.config/Code/User/keybindings.json`\n - Extension list\n\n## Output\n\nProvide a report showing:\n- Current installation method and version\n- Recommended installation method\n- Migration steps (if applicable)\n- Performance optimization suggestions\n- Extension recommendations\n- Configuration backup status\n- Next steps", "filepath": "dev-tools/optimize-vscode-installation.md" }, { "name": "sdk-check", "category": "dev-tools", "description": "", "tags": [], "content": "Check which development SDKs I have installed on my computer.\nStart with identifying what's on path. 
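For example, a quick PATH sweep along these lines would do (just a sketch; the tool list below is only an assumption - adjust it to the toolchains that matter here):\n\n```bash\n# Probe PATH for common SDK/toolchain entry points and show where each one lives\n# (example tool list - swap in the SDKs you actually care about)\nfor tool in python3 pip node npm java javac go rustc cargo dotnet gcc; do\n  command -v $tool >/dev/null 2>&1 && echo \"$tool -> $(command -v $tool)\"\ndone\n```\n\n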
Then see what I might have elsewhere on the filesystem.\n", "filepath": "dev-tools/sdk-check.md" }, { "name": "setup-docker", "category": "dev-tools", "description": "", "tags": [], "content": "# Check and Setup Docker\n\nYou are helping the user check if Docker is configured and set it up if needed.\n\n## Your tasks:\n\n1. **Check if Docker is already installed:**\n - Check Docker: `docker --version`\n - Check Docker Compose: `docker-compose --version` or `docker compose version`\n - Check Docker service: `systemctl status docker`\n\n2. **If Docker is installed, verify configuration:**\n - Check Docker info: `docker info`\n - Check user can run Docker: `docker ps`\n - If permission denied, user needs to be added to docker group\n - Check Docker storage driver and location\n - Check Docker network configuration\n\n3. **If Docker is NOT installed, proceed with installation:**\n\n **Remove old versions:**\n ```bash\n sudo apt-get remove docker docker-engine docker.io containerd runc\n ```\n\n **Update and install prerequisites:**\n ```bash\n sudo apt-get update\n sudo apt-get install ca-certificates curl gnupg lsb-release\n ```\n\n **Add Docker's official GPG key:**\n ```bash\n sudo mkdir -p /etc/apt/keyrings\n curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\n ```\n\n **Set up repository:**\n ```bash\n echo \\\n \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n ```\n\n **Install Docker Engine:**\n ```bash\n sudo apt-get update\n sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\n ```\n\n4. **Post-installation setup:**\n\n **Enable Docker service:**\n ```bash\n sudo systemctl enable docker\n sudo systemctl start docker\n ```\n\n **Add user to docker group:**\n ```bash\n sudo usermod -aG docker $USER\n ```\n Then log out and back in, or run: `newgrp docker`\n\n5. **Verify Docker installation:**\n ```bash\n docker --version\n docker run hello-world\n docker ps\n docker images\n ```\n\n6. **Install Docker Compose (if not included):**\n Modern Docker includes Compose v2 as a plugin.\n Check: `docker compose version`\n\n If needed, install standalone:\n ```bash\n sudo apt-get install docker-compose-plugin\n ```\n\n7. **Configure Docker daemon (optional):**\n Edit `/etc/docker/daemon.json`:\n\n ```json\n {\n \"log-driver\": \"json-file\",\n \"log-opts\": {\n \"max-size\": \"10m\",\n \"max-file\": \"3\"\n },\n \"storage-driver\": \"overlay2\",\n \"dns\": [\"8.8.8.8\", \"8.8.4.4\"]\n }\n ```\n\n Then restart: `sudo systemctl restart docker`\n\n8. **Check Docker storage location:**\n ```bash\n docker info | grep \"Docker Root Dir\"\n sudo du -sh /var/lib/docker\n ```\n\n If storage is on a small partition, consider changing location.\n\n9. **Configure storage location (if needed):**\n In `/etc/docker/daemon.json`:\n ```json\n {\n \"data-root\": \"/new/path/to/docker\"\n }\n ```\n\n Then:\n ```bash\n sudo systemctl stop docker\n sudo mv /var/lib/docker /new/path/to/docker\n sudo systemctl start docker\n ```\n\n10. **Set up Docker networking:**\n Check networks:\n ```bash\n docker network ls\n ```\n\n Create custom networks if needed:\n ```bash\n docker network create my-network\n ```\n\n11. 
**Configure resource limits (optional):**\n For laptops/desktops, may want to limit resources:\n In `/etc/docker/daemon.json`:\n ```json\n {\n \"default-ulimits\": {\n \"nofile\": {\n \"Name\": \"nofile\",\n \"Hard\": 64000,\n \"Soft\": 64000\n }\n }\n }\n ```\n\n12. **Set up Docker Hub authentication (optional):**\n ```bash\n docker login\n ```\n\n13. **Test Docker functionality:**\n Run various test commands:\n ```bash\n docker run hello-world\n docker run -it ubuntu bash\n docker ps -a\n docker images\n docker system info\n ```\n\n14. **Install useful Docker tools (optional):**\n Ask user if they want:\n - **Portainer** (Docker management UI)\n - **ctop** (Container monitoring)\n - **lazydocker** (Terminal UI for Docker)\n\n ```bash\n # ctop\n sudo wget -O /usr/local/bin/ctop https://github.com/bcicen/ctop/releases/download/v0.7.7/ctop-0.7.7-linux-amd64\n sudo chmod +x /usr/local/bin/ctop\n ```\n\n15. **Configure Docker logging:**\n Check current logging:\n ```bash\n docker info | grep \"Logging Driver\"\n ```\n\n Configure in `/etc/docker/daemon.json`:\n ```json\n {\n \"log-driver\": \"json-file\",\n \"log-opts\": {\n \"max-size\": \"10m\",\n \"max-file\": \"5\",\n \"labels\": \"production\"\n }\n }\n ```\n\n16. **Set up Docker cleanup:**\n Suggest adding to crontab:\n ```bash\n # Clean up unused containers, images, networks weekly\n 0 3 * * 0 docker system prune -af --volumes\n ```\n\n Or show manual cleanup:\n ```bash\n docker system prune -a\n docker volume prune\n docker network prune\n ```\n\n17. **Check for common issues:**\n - Docker daemon not running: `sudo systemctl start docker`\n - Permission denied: `sudo usermod -aG docker $USER` and re-login\n - Storage full: `docker system df` and cleanup\n - Network issues: Check DNS in daemon.json\n - Firewall blocking: Check ufw/iptables\n\n18. **Provide best practices:**\n - Don't run containers as root when possible\n - Use Docker Compose for multi-container apps\n - Tag images properly\n - Clean up regularly with `docker system prune`\n - Use .dockerignore files\n - Monitor disk usage: `docker system df`\n - Use specific image tags, not `latest`\n - Scan images for vulnerabilities: `docker scan `\n - Keep Docker updated\n - Use multi-stage builds to reduce image size\n - Limit container resources in production\n\n19. **Show basic Docker commands:**\n - `docker run ` - Run a container\n - `docker ps` - List running containers\n - `docker ps -a` - List all containers\n - `docker images` - List images\n - `docker pull ` - Pull an image\n - `docker build -t .` - Build an image\n - `docker exec -it bash` - Enter container\n - `docker logs ` - View logs\n - `docker stop ` - Stop container\n - `docker rm ` - Remove container\n - `docker rmi ` - Remove image\n - `docker compose up` - Start compose stack\n - `docker system prune` - Clean up\n\n20. 
**Report findings:**\n Summarize:\n - Docker installation status\n - Version information\n - User permissions status\n - Storage configuration\n - Service status\n - Any issues found\n\n## Important notes:\n- User must log out and back in after being added to docker group\n- Docker can use significant disk space - monitor it\n- Don't run untrusted images\n- Docker Desktop is different from Docker Engine (we're installing Engine)\n- Rootless Docker is available for better security but more complex\n- Docker Compose v2 is now a plugin (`docker compose` not `docker-compose`)\n- Keep Docker updated for security patches\n", "filepath": "dev-tools/setup-docker.md" }, { "name": "suggest-ides", "category": "dev-tools", "description": "Suggest IDEs the user may wish to install", "tags": [ "development", "ide", "editors", "tools", "project", "gitignored" ], "content": "You are helping the user identify useful IDEs and code editors to install.\n\n## Process\n\n1. **Check currently installed editors/IDEs**\n ```bash\n which code vim nvim nano emacs gedit kate\n dpkg -l | grep -E \"code|editor|ide\"\n flatpak list | grep -E \"code|editor|ide\"\n ```\n\n2. **Identify user's programming needs**\n - Ask about programming languages used:\n - Python\n - JavaScript/TypeScript\n - Java/Kotlin\n - C/C++/Rust\n - Go\n - Web development\n - Data science\n - Mobile development\n\n3. **Suggest IDEs by category**\n\n **General Purpose (recommended):**\n - **VS Code** - Most popular, extensive plugins\n - **VSCodium** - VS Code without telemetry\n - **JetBrains Fleet** - Modern, lightweight\n - **Sublime Text** - Fast, elegant\n - **Atom** (deprecated, suggest alternatives)\n\n **Language-Specific:**\n - **PyCharm** - Python (Community/Professional)\n - **IntelliJ IDEA** - Java/Kotlin\n - **WebStorm** - JavaScript/TypeScript\n - **RustRover** - Rust\n - **GoLand** - Go\n - **Android Studio** - Android development\n\n **Lightweight Editors:**\n - **Neovim** - Modern Vim\n - **Helix** - Modern modal editor\n - **Micro** - Terminal editor, easy to use\n - **Geany** - GTK editor with IDE features\n\n **Data Science:**\n - **JupyterLab** - Notebooks\n - **RStudio** - R development\n - **Spyder** - Python for scientific computing\n\n **Web Development:**\n - **Zed** - Collaborative, fast\n - **Brackets** - Live preview\n\n4. **Installation methods**\n\n **VS Code:**\n ```bash\n # Official repo\n wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg\n sudo install -o root -g root -m 644 packages.microsoft.gpg /etc/apt/trusted.gpg.d/\n sudo sh -c 'echo \"deb [arch=amd64] https://packages.microsoft.com/repos/code stable main\" > /etc/apt/sources.list.d/vscode.list'\n sudo apt update && sudo apt install code\n ```\n\n **VSCodium:**\n ```bash\n flatpak install flathub com.vscodium.codium\n ```\n\n **JetBrains Toolbox:**\n ```bash\n # Download from jetbrains.com/toolbox/\n # Or use snap: snap install jetbrains-toolbox --classic\n ```\n\n **Neovim:**\n ```bash\n sudo apt install neovim\n ```\n\n5. **Suggest based on current setup**\n - If Python user: Suggest PyCharm\n - If web dev: Suggest VS Code with extensions\n - If systems programming: Suggest Neovim with LSP\n - If prefer FOSS: Suggest VSCodium\n\n6. **Recommend extensions/plugins**\n - For VS Code/VSCodium:\n - Python\n - Pylance\n - GitLens\n - Docker\n - Remote SSH\n - Prettier\n - ESLint\n\n7. 
**Alternative: Check installed editors quality**\n - Vim/Neovim configuration quality\n - VS Code extension count\n - Suggest improvements to existing setup\n\n## Output\n\nProvide a report showing:\n- Currently installed editors/IDEs\n- Recommended IDEs based on user's needs\n- Installation commands for suggestions\n- Extension/plugin recommendations\n- Comparison of options (pros/cons)", "filepath": "dev-tools/suggest-ides.md" } ], "display": [ { "name": "list-connected-displays", "category": "display", "description": "", "tags": [], "content": "# List Connected Displays\n\nYou are helping the user identify all connected displays and their current configurations.\n\n## Task\n\n1. **Detect display server** (Wayland vs X11):\n ```bash\n echo $XDG_SESSION_TYPE\n ```\n\n2. **For Wayland systems:**\n ```bash\n # Using kscreen-doctor (KDE)\n kscreen-doctor -o\n\n # Using wlr-randr (wlroots-based compositors)\n wlr-randr\n\n # Using gnome-randr (GNOME)\n gnome-randr\n ```\n\n3. **For X11 systems:**\n ```bash\n xrandr --query\n ```\n\n4. **Get detailed information:**\n ```bash\n # List all display devices\n ls /sys/class/drm/card*/card*/status\n\n # Check connected displays\n for p in /sys/class/drm/card*/card*/status; do\n echo \"$(dirname $p): $(cat $p)\"\n done\n ```\n\n5. **Show current resolution and refresh rate:**\n ```bash\n # X11\n xrandr | grep -E \"^[^ ]|[0-9]+\\.[0-9]+\\*\"\n\n # Wayland (KDE)\n kscreen-doctor -j | jq -r '.outputs[] | \"\\(.name): \\(.currentMode.size.width)x\\(.currentMode.size.height)@\\(.currentMode.refreshRate)Hz\"'\n ```\n\n## Present to User\n\nSummarize the findings:\n- Number of connected displays\n- Display names/identifiers\n- Current resolution and refresh rate for each\n- Primary display\n- Display position/arrangement\n- Whether using Wayland or X11\n\n## Additional Information\n\nIf requested, also show:\n- Supported resolutions and refresh rates\n- Display manufacturer and model (from EDID)\n- Connection type (HDMI, DisplayPort, etc.)\n- Color depth and color space\n\n## Notes\n\n- Command availability depends on desktop environment and display server\n- KDE Plasma uses kscreen-doctor\n- GNOME may use different tools\n- Some information requires parsing EDID data from `/sys/class/drm/`\n", "filepath": "display/list-connected-displays.md" }, { "name": "optimize-display-scaling", "category": "display", "description": "", "tags": [], "content": "# Optimize Display Scaling\n\nYou are helping the user configure HiDPI scaling and fractional scaling for optimal display clarity.\n\n## Task\n\n1. **Check current display information:**\n ```bash\n # Display server type\n echo $XDG_SESSION_TYPE\n\n # Current resolution and size\n kscreen-doctor -o # Wayland/KDE\n # OR\n xrandr --query # X11\n\n # Calculate DPI\n xdpyinfo | grep -B 2 resolution # X11\n ```\n\n2. **Determine optimal scaling:**\n - Display resolution\n - Physical size (diagonal in inches)\n - Calculate PPI: `sqrt(width\u00b2 + height\u00b2) / diagonal_inches`\n - Recommended scaling:\n - PPI < 110: 100% (1x)\n - PPI 110-140: 125% (1.25x)\n - PPI 140-180: 150% (1.5x)\n - PPI 180-220: 200% (2x)\n - PPI > 220: 250% or higher\n\n3. **For KDE Plasma (Wayland or X11):**\n ```bash\n # Set global scale factor (Wayland)\n kscreen-doctor output.DP-1.scale.1.5\n\n # Or use GUI\n kcmshell6 kcm_kscreen\n ```\n\n **Via settings file:**\n ```bash\n # Edit KDE scaling\n kwriteconfig6 --file kdeglobals --group KScreen --key ScaleFactor 1.5\n kquitapp6 plasmashell && kstart plasmashell\n ```\n\n4. 
**For GNOME (Wayland):**\n ```bash\n # Enable fractional scaling\n gsettings set org.gnome.mutter experimental-features \"['scale-monitor-framebuffer']\"\n\n # Set scale factor (200% = 2.0)\n gsettings set org.gnome.desktop.interface scaling-factor 2\n ```\n\n5. **For X11 systems (general):**\n ```bash\n # Set Xft DPI\n echo \"Xft.dpi: 144\" >> ~/.Xresources\n xrdb -merge ~/.Xresources\n\n # Set scale factor for Qt applications\n export QT_SCALE_FACTOR=1.5\n\n # Set scale factor for GTK applications\n export GDK_SCALE=2\n export GDK_DPI_SCALE=0.5\n ```\n\n6. **Configure per-application scaling:**\n ```bash\n # For specific Qt apps\n QT_SCALE_FACTOR=1.5 application-name\n\n # Add to .bashrc or application launcher\n echo 'export QT_SCALE_FACTOR=1.5' >> ~/.bashrc\n ```\n\n7. **Adjust font DPI:**\n ```bash\n # KDE\n kwriteconfig6 --file kcmfonts --group General --key forceFontDPI 144\n\n # GNOME\n gsettings set org.gnome.desktop.interface text-scaling-factor 1.25\n ```\n\n8. **Handle cursor size:**\n ```bash\n # Set cursor size\n gsettings set org.gnome.desktop.interface cursor-size 32 # GNOME\n kwriteconfig6 --file kcminputrc --group Mouse --key cursorSize 32 # KDE\n ```\n\n## Verify Configuration\n\n1. **Check current settings:**\n ```bash\n echo $QT_SCALE_FACTOR\n echo $GDK_SCALE\n gsettings get org.gnome.desktop.interface scaling-factor # GNOME\n ```\n\n2. **Test applications:**\n - Open various applications (browser, terminal, file manager)\n - Check text clarity\n - Verify UI element sizes\n - Test both Qt and GTK applications\n\n3. **Log out and back in** to apply system-wide changes\n\n## Troubleshooting\n\n- **Blurry applications:** Some apps don't support fractional scaling on X11\n- **Inconsistent scaling:** Mix of Qt and GTK apps may scale differently\n- **Wayland vs X11:** Wayland generally handles fractional scaling better\n- **XWayland apps:** May appear blurry on Wayland with fractional scaling\n\n**Solutions:**\n```bash\n# Force XWayland scaling\nkwriteconfig6 --file kwinrc --group Xwayland --key Scale 1.5\n\n# Disable fractional scaling for specific apps\nenv QT_SCALE_FACTOR=1 application-name\n```\n\n## Notes\n\n- Wayland provides better fractional scaling support than X11\n- Some applications may require restart to apply scaling\n- Integer scaling (1x, 2x) is sharper than fractional (1.25x, 1.5x)\n- Consider display distance when choosing scale factor\n- Multi-monitor setups with different DPI may require per-monitor scaling\n", "filepath": "display/optimize-display-scaling.md" }, { "name": "setup-multi-monitor", "category": "display", "description": "", "tags": [], "content": "# Setup Multi-Monitor Configuration\n\nYou are helping the user configure multiple monitors on their Linux desktop.\n\n## Task\n\n1. **Detect current setup:**\n ```bash\n # Check display server type\n echo $XDG_SESSION_TYPE\n\n # List connected displays\n kscreen-doctor -o # KDE/Wayland\n # OR\n xrandr --query # X11\n ```\n\n2. **Ask the user about their desired configuration:**\n - Which display should be primary?\n - How should displays be arranged? (left-of, right-of, above, below, mirrored)\n - What resolution for each display?\n - What refresh rate for each display?\n\n3. 
**For X11 systems using xrandr:**\n ```bash\n # Example: Two monitors side by side\n xrandr --output HDMI-1 --mode 1920x1080 --rate 60 \\\n --output DP-1 --mode 2560x1440 --rate 144 --right-of HDMI-1 --primary\n\n # Example: Mirror displays\n xrandr --output HDMI-1 --mode 1920x1080 \\\n --output DP-1 --same-as HDMI-1 --mode 1920x1080\n ```\n\n4. **For Wayland/KDE systems using kscreen-doctor:**\n ```bash\n # Example: Configure displays\n kscreen-doctor output.HDMI-1.mode.1920x1080@60 \\\n output.DP-1.mode.2560x1440@144 \\\n output.DP-1.position.1920,0 \\\n output.DP-1.primary\n ```\n\n5. **For GNOME (Wayland):**\n ```bash\n # Use gnome-control-center or gnome-randr\n gnome-randr modify DP-1 --mode 2560x1440 --rate 144 --primary --pos 1920x0\n ```\n\n6. **Make configuration persistent:**\n\n **X11:**\n - Configuration is typically saved by the desktop environment\n - Can add xrandr commands to `~/.xprofile` or startup scripts\n\n **Wayland/KDE:**\n - KDE saves configuration automatically in `~/.local/share/kscreen/`\n\n **Create startup script if needed:**\n ```bash\n # Create script\n cat > ~/.config/autostart-scripts/monitor-setup.sh << 'EOF'\n #!/bin/bash\n # Monitor setup commands here\n EOF\n\n chmod +x ~/.config/autostart-scripts/monitor-setup.sh\n ```\n\n7. **Test and verify:**\n - Check if all displays are showing correctly\n - Verify resolution and refresh rates\n - Test primary display setting\n - Check display arrangement by moving windows\n\n## Troubleshooting\n\nIf issues occur:\n- Check if displays are detected: `xrandr --listmonitors` or `kscreen-doctor -o`\n- Verify supported modes for each display\n- Try lower refresh rates if displays flicker\n- Check cable connections and quality\n- Look for errors in logs: `journalctl -b | grep -i drm`\n\n## Notes\n\n- Commands differ significantly between X11 and Wayland\n- Desktop environment may provide GUI tools (System Settings)\n- Some configurations may require restarting the display manager\n- High refresh rates require appropriate cables (DisplayPort 1.4, HDMI 2.1)\n", "filepath": "display/setup-multi-monitor.md" }, { "name": "switch-display-profile", "category": "display", "description": "", "tags": [], "content": "# Switch Display Profile\n\nYou are helping the user switch between different display configurations/profiles (e.g., docked, laptop only, presentation).\n\n## Task\n\n1. **Check current display setup:**\n ```bash\n # Display server type\n echo $XDG_SESSION_TYPE\n\n # Connected displays\n kscreen-doctor -o # KDE/Wayland\n # OR\n xrandr --query # X11\n ```\n\n2. **Ask user which profile they want:**\n - **Laptop only** (internal display only)\n - **Docked** (external monitors only or primary external)\n - **Extended** (laptop + external monitors)\n - **Mirrored** (same output on all displays)\n - **Presentation** (external display as primary)\n - **Custom** (user-defined configuration)\n\n3. **KDE Plasma - Use saved profiles:**\n ```bash\n # List saved display configurations\n ls ~/.local/share/kscreen/\n\n # KDE automatically switches profiles based on connected displays\n # Force reload configuration\n kscreen-doctor --reload\n ```\n\n4. **Create/Use autorandr (X11):**\n ```bash\n # Install if not present\n sudo apt install autorandr\n\n # Save current configuration as profile\n autorandr --save profile-name\n\n # Switch to saved profile\n autorandr --load profile-name\n\n # Auto-detect and load appropriate profile\n autorandr --change\n ```\n\n5. 
**Manual X11 configuration examples:**\n\n **Laptop only:**\n ```bash\n xrandr --output eDP-1 --auto --primary \\\n --output HDMI-1 --off \\\n --output DP-1 --off\n ```\n\n **Docked (external monitors only):**\n ```bash\n xrandr --output eDP-1 --off \\\n --output HDMI-1 --auto --primary \\\n --output DP-1 --auto --right-of HDMI-1\n ```\n\n **Extended (laptop + external):**\n ```bash\n xrandr --output eDP-1 --auto \\\n --output HDMI-1 --auto --primary --above eDP-1\n ```\n\n **Mirrored:**\n ```bash\n xrandr --output eDP-1 --auto \\\n --output HDMI-1 --auto --same-as eDP-1\n ```\n\n6. **Wayland/KDE configuration:**\n ```bash\n # Laptop only\n kscreen-doctor output.eDP-1.enable \\\n output.HDMI-1.disable \\\n output.DP-1.disable\n\n # Docked\n kscreen-doctor output.eDP-1.disable \\\n output.HDMI-1.enable output.HDMI-1.primary \\\n output.DP-1.enable output.DP-1.position.1920,0\n\n # Extended\n kscreen-doctor output.eDP-1.enable \\\n output.HDMI-1.enable output.HDMI-1.primary \\\n output.HDMI-1.position.0,-1080\n ```\n\n7. **Create quick-switch scripts:**\n ```bash\n # Create scripts directory\n mkdir -p ~/.local/bin/display-profiles\n\n # Example: Laptop only script\n cat > ~/.local/bin/display-profiles/laptop-only.sh << 'EOF'\n #!/bin/bash\n xrandr --output eDP-1 --auto --primary \\\n --output HDMI-1 --off \\\n --output DP-1 --off\n notify-send \"Display Profile\" \"Switched to: Laptop Only\"\n EOF\n\n chmod +x ~/.local/bin/display-profiles/laptop-only.sh\n ```\n\n8. **Create keyboard shortcuts (KDE):**\n ```bash\n # Add custom shortcut\n kwriteconfig6 --file kglobalshortcutsrc \\\n --group \"Display Profile - Laptop\" \\\n --key \"_k_friendly_name\" \"Laptop Display Only\"\n ```\n\n9. **Automatic profile switching with udev:**\n ```bash\n # Create udev rule for auto-switching when docking\n sudo tee /etc/udev/rules.d/95-monitor-hotplug.rules << 'EOF'\n ACTION==\"change\", SUBSYSTEM==\"drm\", RUN+=\"/usr/local/bin/display-switch.sh\"\n EOF\n\n # Create switch script\n sudo tee /usr/local/bin/display-switch.sh << 'EOF'\n #!/bin/bash\n export DISPLAY=:0\n export XAUTHORITY=/home/daniel/.Xauthority\n /usr/bin/autorandr --change\n EOF\n\n sudo chmod +x /usr/local/bin/display-switch.sh\n sudo udevadm control --reload-rules\n ```\n\n## Verify Configuration\n\n- Check display arrangement with GUI or commands\n- Test window movement between displays\n- Verify primary display setting\n- Check resolution and refresh rates\n- Test systray/panel positioning\n\n## Troubleshooting\n\n**Profile not loading:**\n- Verify display names with `xrandr` or `kscreen-doctor -o`\n- Check saved profile files\n- Ensure all referenced displays are connected\n\n**Automatic switching not working:**\n- Check udev rule syntax\n- Verify script permissions and paths\n- Check logs: `journalctl -f` while connecting/disconnecting displays\n\n**Black screen after switching:**\n- Use Ctrl+Alt+F2 to access TTY\n- Reset display configuration: `xrandr --auto`\n- Restart display manager: `sudo systemctl restart sddm` (or gdm, lightdm)\n\n## Notes\n\n- autorandr is the most reliable tool for X11 profile management\n- KDE Plasma handles profile switching automatically on Wayland\n- Save profiles for different scenarios (home office, conference room, etc.)\n- Consider using descriptive names for profiles\n- Test profiles before relying on them for important presentations\n", "filepath": "display/switch-display-profile.md" } ], "filesystem-organization": [ { "name": "consolidate-folders", "category": "filesystem-organization", 
"description": "", "tags": [], "content": "You are helping Daniel consolidate redundant or overlapping folders into a more organized structure.\n\n## Your Task\n\n1. **Analyze subdirectories**: List all subdirectories in the current working directory\n2. **Identify overlaps**: Look for:\n - Folders with similar names or purposes\n - Folders containing similar types of files\n - Multiple folders that could logically be merged\n - Poorly named folders that could be better organized\n3. **Analyze contents**: For each folder, show:\n - Number of files\n - File types present\n - Apparent purpose based on names and contents\n4. **Propose consolidation plan**: Suggest:\n - Which folders should be merged\n - What the consolidated folder should be named\n - Whether any subfolders should be created within the consolidated folder\n - Any folders that should be renamed for clarity\n5. **Present the plan**: Show Daniel:\n - Current folder structure with file counts\n - Proposed new structure\n - Exactly which folders will be merged and where their contents will go\n6. **Ask for confirmation**: Get Daniel's approval before making any changes\n7. **Execute if approved**:\n - Create new folder structure\n - Move files from old folders to new locations\n - Remove empty old folders\n - Provide a summary of what was done\n\n## Guidelines\n\n- Look for semantic similarities, not just name matches\n- Consider the logical grouping of content\n- Suggest hierarchical structures where appropriate (parent/child folders)\n- Be conservative - when in doubt, ask rather than assuming\n- Preserve all files - never delete data\n- Suggest meaningful, descriptive folder names\n\n## Safety\n\n- NEVER delete files, only move them\n- If filename conflicts occur, append numeric suffixes\n- Keep original folders until Daniel confirms they can be removed\n- Ask before proceeding with moves and deletions\n- Provide a detailed log of all changes made\n", "filepath": "filesystem-organization/consolidate-folders.md" }, { "name": "flatten", "category": "filesystem-organization", "description": "", "tags": [], "content": "Flatten the folder hierarchy.\n\nAll the files in the subfolders should be moved to this level in the filesystem.\n\nResolve any filename clashes by appending unique characters.\n\nIf this directory structure contains media (images or video) then after flattening the hierarchy ask me (the user) if I would like you to run a checksum based check to identify and remove any duplicates.\n", "filepath": "filesystem-organization/flatten.md" }, { "name": "separate-by-filetype", "category": "filesystem-organization", "description": "", "tags": [], "content": "This folder contains media of different formats\nCreate subfolders for common media types - for example audio, photos, video\nMove the corresponding files into the relevant folders. E.g. mv *.mp4 -> /video\n", "filepath": "filesystem-organization/separate-by-filetype.md" }, { "name": "separate-photos-and-video", "category": "filesystem-organization", "description": "", "tags": [], "content": "This folder contains a mixture of photos and video\n\nCreate subfolders /photos and /videos\n\nMove photos into photos and videos into videos\n", "filepath": "filesystem-organization/separate-photos-and-video.md" }, { "name": "suggest-folder-structure", "category": "filesystem-organization", "description": "", "tags": [], "content": "You are helping Daniel design an optimal folder structure for a directory before organizing files.\n\n## Your Task\n\n1. 
**Analyze the current state**: Examine both files and folders in the current directory\n2. **Understand the content**: Look at:\n - What types of files are present\n - What the files appear to be for (projects, documents, media, etc.)\n - Any existing organizational patterns\n - The overall purpose of this directory\n3. **Design a structure**: Propose a logical folder hierarchy that would:\n - Group related items together\n - Make files easy to find\n - Scale well as more files are added\n - Follow common organizational best practices\n4. **Present options**: Offer 2-3 different organizational approaches:\n - By project/topic\n - By file type\n - By date/time period\n - By workflow/status\n - Or a hybrid approach\n5. **Explain the rationale**: For each proposed structure, explain:\n - Why this organization makes sense\n - Pros and cons\n - What types of users/workflows it suits best\n6. **Get feedback**: Ask Daniel which approach he prefers or if he wants modifications\n7. **Provide implementation guidance**: Once approved, explain how to implement it (or offer to run /organize-loose-files or /consolidate-folders)\n\n## Guidelines\n\n- Consider Daniel's existing organizational patterns in other directories\n- Suggest structures that are intuitive and maintainable\n- Don't over-complicate - deeper isn't always better\n- Consider future growth and scalability\n- Use clear, descriptive folder names\n- Suggest naming conventions if helpful\n\n## Example Structures to Consider\n\n- **Flat categorical**: Top-level folders by category (Documents, Images, Projects, etc.)\n- **Hierarchical by project**: Main folders for projects, subfolders for file types\n- **Chronological**: Organized by year/month or project timeline\n- **Workflow-based**: Folders like \"Active\", \"Archive\", \"Reference\", \"In-Progress\"\n- **Hybrid**: Combination of approaches (e.g., projects at top level, then by file type)\n\n## Output\n\nPresent folder structure proposals using tree format for clarity, showing:\n- Proposed folder names\n- Purpose of each folder\n- Example of what would go in each folder\n- Estimated number of files that would go in each folder based on current contents\n", "filepath": "filesystem-organization/suggest-folder-structure.md" } ], "fonts": [ { "name": "install-google-fonts", "category": "fonts", "description": "Install Google Fonts provided by the user", "tags": [ "fonts", "google-fonts", "typography", "installation", "project", "gitignored" ], "content": "You are helping the user install Google Fonts by name.\n\n## Process\n\n1. **Get font names from user**\n - Ask user which Google Fonts they want to install\n - Accept multiple font names\n\n2. **Choose installation method**\n\n **Method 1: Using google-font-installer (if available)**\n - Install tool: `pip install gftools`\n - Download font: `gftools download-family \"Font Name\"`\n\n **Method 2: Using font-downloader**\n - Install: `sudo apt install font-manager`\n - Or use: `pip install font-downloader`\n\n **Method 3: Manual download**\n - Download from: `https://fonts.google.com/`\n - Or use GitHub: `https://github.com/google/fonts/tree/main/`\n\n3. **Download fonts**\n - For each font name:\n - Convert name to lowercase with dashes (e.g., \"Roboto Mono\" \u2192 \"roboto-mono\")\n - Download from: `https://fonts.google.com/download?family=Font+Name`\n - Or clone specific font: `git clone https://github.com/google/fonts.git --depth 1 --filter=blob:none --sparse && cd fonts && git sparse-checkout set ofl/`\n\n4. 
**Install fonts**\n - Create user font directory: `mkdir -p ~/.local/share/fonts/google-fonts`\n - Extract and copy font files:\n ```bash\n unzip .zip -d ~/.local/share/fonts/google-fonts//\n ```\n - Only copy .ttf and .otf files\n\n5. **Update font cache**\n - Run: `fc-cache -fv`\n - Verify installation: `fc-list | grep -i \"\"`\n\n6. **Provide usage examples**\n - Show how to use in applications\n - Show how to set as system font\n - Show how to use in CSS/web design\n\n## Example Workflow\n\n```bash\n# Example: Installing \"Roboto\" and \"Open Sans\"\nmkdir -p ~/.local/share/fonts/google-fonts\ncd /tmp\n\n# Download Roboto\nwget \"https://fonts.google.com/download?family=Roboto\" -O roboto.zip\nunzip roboto.zip -d ~/.local/share/fonts/google-fonts/roboto/\n\n# Download Open Sans\nwget \"https://fonts.google.com/download?family=Open+Sans\" -O open-sans.zip\nunzip open-sans.zip -d ~/.local/share/fonts/google-fonts/open-sans/\n\n# Update cache\nfc-cache -fv\n\n# Verify\nfc-list | grep -i \"roboto\\|open sans\"\n```\n\n## Output\n\nProvide a summary showing:\n- Fonts requested by user\n- Download and installation status for each\n- Installation location\n- Verification that fonts are available\n- Usage examples", "filepath": "fonts/install-google-fonts.md" }, { "name": "list-fonts", "category": "fonts", "description": "List installed fonts and offer to install additional fonts", "tags": [ "fonts", "typography", "system", "customization", "project", "gitignored" ], "content": "You are helping the user review their installed fonts and install additional ones if requested.\n\n## Process\n\n1. **List currently installed fonts**\n - System fonts: `fc-list | cut -d: -f2 | sort -u | wc -l` (count)\n - Show font families: `fc-list : family | sort -u`\n - List font directories:\n - System: `/usr/share/fonts/`\n - User: `~/.local/share/fonts/`\n\n2. **Categorize installed fonts**\n - Serif fonts\n - Sans-serif fonts\n - Monospace/coding fonts\n - Display/decorative fonts\n - Icon fonts\n\n3. **Check for common font packages**\n - `dpkg -l | grep -E \"fonts-|ttf-\"`\n - Common packages:\n - `fonts-liberation`\n - `fonts-noto`\n - `fonts-roboto`\n - `ttf-mscorefonts-installer`\n - `fonts-powerline`\n\n4. **Suggest useful font additions**\n\n **For coding:**\n - Fira Code (ligatures)\n - JetBrains Mono\n - Cascadia Code\n - Victor Mono\n - Source Code Pro\n\n **For design:**\n - Inter\n - Poppins\n - Montserrat\n - Raleway\n\n **System fonts:**\n - Noto fonts (comprehensive Unicode)\n - Liberation fonts (MS Office compatible)\n\n **Icons:**\n - Font Awesome\n - Material Design Icons\n - Nerd Fonts\n\n5. **Installation methods**\n - APT: `sudo apt install fonts-`\n - Manual installation:\n ```bash\n mkdir -p ~/.local/share/fonts\n # Copy font files to directory\n fc-cache -fv\n ```\n - Google Fonts downloader (see separate command)\n\n6. 
**Test font installation**\n - Refresh font cache: `fc-cache -fv`\n - Verify font: `fc-list | grep -i FONT_NAME`\n - Show sample: `fc-match FONT_NAME`\n\n## Output\n\nProvide a report showing:\n- Total number of installed font families\n- List of installed fonts by category\n- Missing commonly-used fonts\n- Suggested fonts to install based on use case\n- Installation commands", "filepath": "fonts/list-fonts.md" } ], "hardware": [ { "name": "check-gpu-os-optimization", "category": "hardware", "description": "Evaluate if OS is properly optimized to support the GPU", "tags": [ "gpu", "amd", "rocm", "optimization", "drivers", "project", "gitignored" ], "content": "You are helping the user verify their OS is properly optimized for their GPU (AMD in Daniel's case).\n\n## Process\n\n1. **Identify GPU**\n - List GPUs: `lspci | grep -E \"VGA|3D\"`\n - Get detailed info: `lspci -v -s $(lspci | grep VGA | cut -d\" \" -f1)`\n - For AMD: `rocm-smi` or `rocminfo`\n\n2. **Check GPU drivers**\n\n **For AMD (ROCm):**\n - Check installed ROCm package versions: `dpkg -l | grep rocm`\n - Check kernel module: `lsmod | grep amdgpu`\n - Check firmware: `ls /usr/lib/firmware/amdgpu/`\n - Verify driver: `glxinfo | grep \"OpenGL renderer\"`\n\n **Verify correct driver is loaded:**\n - Check Xorg/Wayland: `glxinfo | grep -E \"vendor|renderer\"`\n - Should show AMD/RADV, not llvmpipe (software rendering)\n\n3. **Check compute support**\n - ROCm installation: `rocminfo`\n - HIP runtime: `hipconfig --version`\n - Check device visibility: `rocm-smi --showproductname`\n - Verify compute capability\n\n4. **Check required packages**\n ```bash\n dpkg -l | grep -E \"rocm|amdgpu|mesa\"\n ```\n - Key packages for AMD:\n - `rocm-hip-runtime`\n - `rocm-opencl-runtime`\n - `mesa-vulkan-drivers`\n - `mesa-va-drivers` (for video acceleration)\n - `libdrm-amdgpu1`\n\n5. **Verify hardware acceleration**\n - VA-API: `vainfo` (should show AMD)\n - Vulkan: `vulkaninfo | grep deviceName`\n - OpenGL: `glxinfo | grep \"direct rendering\"`\n - OpenCL: `clinfo | grep \"Device Name\"`\n\n6. **Check performance settings**\n - GPU clock states: `cat /sys/class/drm/card*/device/pp_power_profile_mode`\n - Performance level: `cat /sys/class/drm/card*/device/power_dpm_force_performance_level`\n - Fan control: `rocm-smi --showfan`\n\n7. **System configuration**\n - Check user in video/render groups: `groups $USER`\n - Should include: `video`, `render`\n - If not: `sudo usermod -aG video,render $USER`\n\n8. **Check for optimization opportunities**\n - Latest drivers available?\n - Kernel parameters optimized?\n - Memory (BAR size) properly configured?\n - PCI-E link speed: `lspci -vv | grep -A 10 VGA | grep LnkSta`
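\n - Quick check of what is already in place (a sketch; applying new GRUB parameters is covered in the next step):\n ```bash\n # Kernel parameters currently in effect (look for existing amdgpu.* options)\n cat /proc/cmdline\n # The GRUB line the next step would modify; run sudo update-grub after editing it\n grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub\n ```\n\n9. 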
**Suggest improvements**\n - Update drivers if outdated\n - Install missing packages\n - Optimize kernel parameters in GRUB:\n - `amdgpu.ppfeaturemask=0xffffffff` (unlock all features)\n - `amdgpu.dpm=1` (enable dynamic power management)\n - Enable ReBAR if supported\n\n## Output\n\nProvide a report showing:\n- GPU model and details\n- Driver status (version, loaded correctly)\n- ROCm/compute support status\n- Hardware acceleration status (VA-API, Vulkan, OpenGL)\n- User group membership\n- Performance settings\n- Missing packages or configurations\n- Recommended optimizations", "filepath": "hardware/check-gpu-os-optimization.md" }, { "name": "evaluate-wake-devices", "category": "hardware", "description": "Evaluate wake devices and help remove them for better hibernation", "tags": [ "power", "hibernation", "wake-devices", "optimization", "project", "gitignored" ], "content": "You are helping the user evaluate and configure wake devices to improve hibernation/sleep behavior.\n\n## Process\n\n1. **Check current wake-enabled devices**\n - List devices that can wake system: `cat /proc/acpi/wakeup`\n - Show USB wake devices: `grep . /sys/bus/usb/devices/*/power/wakeup`\n - Check PCI wake devices: `grep . /sys/bus/pci/devices/*/power/wakeup`\n\n2. **Identify wake sources**\n - Check what woke the system last: `journalctl -b -1 -n 50 | grep -i \"wakeup\\|wake\"`\n - Review systemd sleep logs: `journalctl -u systemd-suspend -n 50`\n - Check for spurious wakeups\n\n3. **Common wake device categories**\n - Keyboard/Mouse (USB devices)\n - Network cards (Ethernet/WiFi)\n - Bluetooth adapters\n - USB hubs\n - Audio devices\n - ACPI devices (power buttons, lid switches)\n\n4. **Disable unnecessary wake devices**\n\n **Temporary (until reboot):**\n - Disable USB device: `echo disabled > /sys/bus/usb/devices/DEVICE_ID/power/wakeup`\n - Toggle ACPI device (use a name listed by `cat /proc/acpi/wakeup`): `echo DEVICE_NAME > /proc/acpi/wakeup`\n\n **Permanent (via udev rules):**\n - Create rule: `/etc/udev/rules.d/90-disable-wakeup.rules`\n - Example:\n ```\n # Disable USB wakeup for all USB devices except keyboard\n ACTION==\"add\", SUBSYSTEM==\"usb\", DRIVER==\"usb\", ATTR{power/wakeup}=\"disabled\"\n ```\n\n **Via systemd service:**\n - Create: `/etc/systemd/system/disable-usb-wakeup.service`\n - Set wake devices on boot\n\n5. **Test configuration**\n - Suspend system: `systemctl suspend`\n - Try to wake with various devices\n - Verify unwanted devices don't wake system\n\n6. **Suggest optimal configuration**\n - Typically keep enabled:\n - Power button\n - Keyboard (if wired)\n - Laptop lid switch\n - Typically disable:\n - Mice\n - USB hubs\n - Network cards (unless Wake-on-LAN needed)\n - Bluetooth\n\n7. **Create persistent configuration**\n - Offer to create udev rules\n - Offer to create systemd service\n - Provide script to restore settings on boot\n\n## Output\n\nProvide a report showing:\n- Currently wake-enabled devices\n- Devices that have caused wakeups\n- Recommended devices to disable\n- Configuration method (udev/systemd)\n- Commands to apply changes\n- How to test and verify", "filepath": "hardware/evaluate-wake-devices.md" }, { "name": "review-gpu-settings", "category": "hardware", "description": "Review GPU settings and suggest compatible monitoring tools", "tags": [ "gpu", "monitoring", "settings", "optimization", "tools", "project", "gitignored" ], "content": "You are helping the user review GPU settings and suggest appropriate monitoring tools.\n\n## Process\n\n1. 
**Current GPU configuration review**\n - Power management mode: `cat /sys/class/drm/card*/device/power_dpm_state`\n - Performance level: `cat /sys/class/drm/card*/device/power_dpm_force_performance_level`\n - Clock speeds:\n ```bash\n cat /sys/class/drm/card*/device/pp_dpm_sclk # GPU clock\n cat /sys/class/drm/card*/device/pp_dpm_mclk # Memory clock\n ```\n - Temperature limits: `cat /sys/class/drm/card*/device/hwmon/hwmon*/temp*_crit`\n\n2. **Power profile settings**\n - Available profiles: `cat /sys/class/drm/card*/device/pp_power_profile_mode`\n - Typical profiles:\n - BOOTUP_DEFAULT\n - 3D_FULL_SCREEN\n - POWER_SAVING\n - VIDEO\n - VR\n - COMPUTE\n\n3. **Fan control settings**\n - Fan mode: `cat /sys/class/drm/card*/device/hwmon/hwmon*/pwm*_enable`\n - Fan speed: `cat /sys/class/drm/card*/device/hwmon/hwmon*/pwm*`\n - Auto vs manual control\n\n4. **Overclocking/undervolting status**\n - Check if overclocking is enabled\n - Voltage settings: `cat /sys/class/drm/card*/device/pp_od_clk_voltage`\n - Power limit: `rocm-smi --showmaxpower`\n\n5. **Suggest monitoring tools**\n\n **CLI Tools:**\n - `rocm-smi` - AMD's official tool (already mentioned)\n - `radeontop` - Real-time AMD GPU usage\n - `nvtop` - Works with AMD GPUs too (better visualization)\n - `htop` with GPU support\n\n **GUI Tools:**\n - `radeon-profile` - Comprehensive AMD GPU control\n - `CoreCtrl` - Modern GPU/CPU control for Linux\n - `GreenWithEnvy` (GWE) - Mainly NVIDIA, but has AMD support\n - `Mission Center` - System monitor with GPU support\n - `Mangohud` - In-game overlay for monitoring\n\n **System monitoring:**\n - `conky` with GPU scripts\n - `btop` - Resource monitor with GPU\n - `glances` - With GPU plugin\n\n6. **Install and configure recommended tool**\n\n **For AMD, recommend CoreCtrl:**\n ```bash\n sudo apt install corectrl\n ```\n - Set up autostart\n - Configure polkit rules for non-root access\n\n **For CLI, recommend nvtop:**\n ```bash\n sudo apt install nvtop\n ```\n\n **For gaming overlay, recommend Mangohud:**\n ```bash\n sudo apt install mangohud\n ```\n\n7. **Configure optimal settings**\n - Suggest performance profile for user's use case:\n - Gaming: 3D_FULL_SCREEN\n - AI/ML: COMPUTE\n - Video encoding: VIDEO\n - Power saving: POWER_SAVING\n\n - Offer to create script to set preferred profile on boot\n\n8. **Create monitoring script**\n - Offer to create a simple GPU monitoring script:\n ```bash\n #!/bin/bash\n watch -n 1 'rocm-smi && echo && sensors | grep -A 3 amdgpu'\n ```\n\n## Output\n\nProvide a report showing:\n- Current GPU settings summary\n- Active power profile\n- Temperature and fan status\n- Recommended monitoring tools (CLI and GUI)\n- Installation commands for suggested tools\n- Optimal settings for user's use case\n- Script to apply recommended settings", "filepath": "hardware/review-gpu-settings.md" } ], "installation": [ { "name": "install-this", "category": "installation", "description": "", "tags": [], "content": "Install the program at this directory.\n\nIf it needs to be unzipped and unpacked, unzip, in the first instance to ~/programs and choose the most appropriate folder at that level (the folders divide programs into topics).\n", "filepath": "installation/install-this.md" } ], "kde": [ { "name": "backup-kde-settings", "category": "kde", "description": "", "tags": [], "content": "# Backup KDE Settings\n\nYou are helping the user export and backup their KDE Plasma configuration settings.\n\n## Task\n\n1. 
**Identify KDE configuration locations:**\n ```bash\n # Main config directory\n echo \"KDE Config: ~/.config/\"\n\n # Local data directory\n echo \"KDE Data: ~/.local/share/\"\n\n # Cache directory\n echo \"KDE Cache: ~/.cache/\"\n ```\n\n2. **List important KDE configuration files:**\n ```bash\n # Core KDE files\n ls -lh ~/.config/k* ~/.config/plasma* ~/.config/*rc 2>/dev/null | head -30\n\n # Desktop files\n ls -lh ~/.local/share/plasma* ~/.local/share/k* 2>/dev/null | head -20\n ```\n\n3. **Create backup directory:**\n ```bash\n # Create timestamped backup directory\n BACKUP_DIR=~/.kde-backups/kde-backup-$(date +%Y%m%d-%H%M%S)\n mkdir -p \"$BACKUP_DIR\"\n echo \"Backup directory: $BACKUP_DIR\"\n ```\n\n4. **Backup essential KDE configurations:**\n ```bash\n # Core Plasma configuration\n cp -r ~/.config/plasma* \"$BACKUP_DIR/\" 2>/dev/null\n cp -r ~/.config/k* \"$BACKUP_DIR/\" 2>/dev/null\n\n # Application-specific configs\n for config in kdeglobals kwinrc dolphinrc konsolerc katerc spectaclerc; do\n [ -f ~/.config/$config ] && cp ~/.config/$config \"$BACKUP_DIR/\"\n done\n\n # Desktop theme and appearance\n cp ~/.config/plasmarc \"$BACKUP_DIR/\" 2>/dev/null\n cp ~/.config/plasmashellrc \"$BACKUP_DIR/\" 2>/dev/null\n\n # Keyboard shortcuts\n cp ~/.config/kglobalshortcutsrc \"$BACKUP_DIR/\" 2>/dev/null\n cp ~/.config/khotkeysrc \"$BACKUP_DIR/\" 2>/dev/null\n ```\n\n5. **Backup KDE desktop layouts:**\n ```bash\n # Plasma layouts\n cp -r ~/.local/share/plasma \"$BACKUP_DIR/plasma-data\" 2>/dev/null\n\n # Desktop scripts\n cp -r ~/.local/share/kservices5 \"$BACKUP_DIR/kservices\" 2>/dev/null\n\n # Plasmoids and widgets\n cp -r ~/.local/share/plasmashell \"$BACKUP_DIR/plasmashell-data\" 2>/dev/null\n ```\n\n6. **Backup application data:**\n ```bash\n # Dolphin bookmarks and settings\n cp -r ~/.local/share/dolphin \"$BACKUP_DIR/dolphin-data\" 2>/dev/null\n\n # Konsole profiles\n cp -r ~/.local/share/konsole \"$BACKUP_DIR/konsole-data\" 2>/dev/null\n\n # KWallet (encrypted passwords)\n cp -r ~/.local/share/kwalletd \"$BACKUP_DIR/kwallet-data\" 2>/dev/null\n\n # Akonadi (if using KMail/contacts)\n # cp -r ~/.local/share/akonadi \"$BACKUP_DIR/akonadi-data\" 2>/dev/null\n ```\n\n7. **Backup color schemes and themes:**\n ```bash\n # Color schemes\n cp -r ~/.local/share/color-schemes \"$BACKUP_DIR/\" 2>/dev/null\n\n # Plasma themes\n cp -r ~/.local/share/plasma/desktoptheme \"$BACKUP_DIR/\" 2>/dev/null\n\n # Icon themes\n cp -r ~/.local/share/icons \"$BACKUP_DIR/\" 2>/dev/null\n\n # Window decorations\n cp -r ~/.local/share/aurorae \"$BACKUP_DIR/\" 2>/dev/null\n ```\n\n8. **Create backup manifest:**\n ```bash\n cat > \"$BACKUP_DIR/BACKUP_INFO.txt\" << EOF\n KDE Plasma Backup\n =================\n Date: $(date)\n Hostname: $(hostname)\n KDE Version: $(plasmashell --version)\n Qt Version: $(qmake --version | grep -i \"Qt version\")\n\n Backup Contents:\n ----------------\n $(find \"$BACKUP_DIR\" -type f | wc -l) files backed up\n $(du -sh \"$BACKUP_DIR\" | cut -f1) total size\n\n Configuration Files:\n $(ls -1 \"$BACKUP_DIR\"/*.rc \"$BACKUP_DIR\"/*config* 2>/dev/null | wc -l) config files\n\n Important Files:\n $(ls -lh \"$BACKUP_DIR\" | grep -E \"plasma|kwin|dolphin|konsole|kdeglobals\")\n EOF\n\n cat \"$BACKUP_DIR/BACKUP_INFO.txt\"\n ```\n\n9. 
**Compress backup:**\n ```bash\n # Create compressed archive\n cd ~/.kde-backups\n tar czf \"$(basename \"$BACKUP_DIR\").tar.gz\" \"$(basename \"$BACKUP_DIR\")\"\n\n # Show results\n ls -lh \"$(basename \"$BACKUP_DIR\").tar.gz\"\n echo \"Backup archived: ~/.kde-backups/$(basename \"$BACKUP_DIR\").tar.gz\"\n ```\n\n10. **Optional: Sync to cloud or external storage:**\n ```bash\n # Ask user if they want to copy backup elsewhere\n # Example: Copy to NAS\n # scp \"$(basename \"$BACKUP_DIR\").tar.gz\" user@nas:/backups/kde/\n\n # Example: Copy to external drive\n # cp \"$(basename \"$BACKUP_DIR\").tar.gz\" /mnt/external/kde-backups/\n ```\n\n## Restoration Instructions\n\nCreate a restoration guide in the backup:\n```bash\ncat > \"$BACKUP_DIR/RESTORE.md\" << 'EOF'\n# KDE Backup Restoration\n\n## To restore this backup:\n\n1. **Extract the archive:**\n ```bash\n cd ~/.kde-backups\n tar xzf kde-backup-TIMESTAMP.tar.gz\n ```\n\n2. **Close all KDE applications** (important!)\n\n3. **Restore configuration files:**\n ```bash\n cd kde-backup-TIMESTAMP\n cp -r k* plasma* *.rc ~/.config/\n ```\n\n4. **Restore data files:**\n ```bash\n cp -r plasma-data ~/.local/share/plasma\n cp -r dolphin-data ~/.local/share/dolphin\n cp -r konsole-data ~/.local/share/konsole\n ```\n\n5. **Restart Plasma:**\n ```bash\n kquitapp6 plasmashell && kstart plasmashell\n ```\n\n Or log out and back in.\n\n## Selective Restoration\n\nTo restore only specific components:\n- Shortcuts: `cp kglobalshortcutsrc ~/.config/`\n- Theme: `cp plasmarc ~/.config/`\n- Panel/Desktop: `cp -r plasma-data ~/.local/share/plasma`\n- Dolphin: `cp -r dolphin-data ~/.local/share/dolphin`\n\nEOF\n```\n\n## Minimal vs Complete Backup\n\n**Minimal backup** (most important settings):\n```bash\n# Just essential configs\ntar czf ~/.kde-backups/kde-minimal-backup.tar.gz \\\n ~/.config/kdeglobals \\\n ~/.config/kwinrc \\\n ~/.config/plasmarc \\\n ~/.config/kglobalshortcutsrc \\\n ~/.local/share/plasma\n```\n\n**Complete backup** (everything):\n```bash\n# All KDE data\ntar czf ~/.kde-backups/kde-complete-backup.tar.gz \\\n ~/.config/k* \\\n ~/.config/plasma* \\\n ~/.local/share/plasma* \\\n ~/.local/share/k* \\\n --exclude=\"*.lock\" \\\n --exclude=\"*cache*\"\n```\n\n## Scheduled Backups\n\nCreate a cron job for automatic backups:\n```bash\n# Add to crontab\n(crontab -l 2>/dev/null; echo \"0 2 * * 0 $HOME/.local/bin/backup-kde.sh\") | crontab -\n\n# Create backup script\ncat > ~/.local/bin/backup-kde.sh << 'EOF'\n#!/bin/bash\nBACKUP_DIR=~/.kde-backups/kde-backup-$(date +%Y%m%d)\nmkdir -p \"$BACKUP_DIR\"\ncp -r ~/.config/{k*,plasma*} \"$BACKUP_DIR/\" 2>/dev/null\ncp -r ~/.local/share/plasma \"$BACKUP_DIR/\" 2>/dev/null\ntar czf \"$BACKUP_DIR.tar.gz\" \"$BACKUP_DIR\"\nrm -rf \"$BACKUP_DIR\"\n# Keep only last 4 backups\ncd ~/.kde-backups && ls -t | tail -n +5 | xargs rm -f\nEOF\n\nchmod +x ~/.local/bin/backup-kde.sh\n```\n\n## Notes\n\n- KDE 5 uses `kf5` directory structure, KDE 6 uses `kf6`\n- Plasma 6 may have different config locations\n- Don't backup cache files (*.lock, cache directories)\n- KWallet passwords require the same user password to decrypt\n- Some configs are machine-specific (display settings)\n- Consider using Konsave (KDE settings manager) for easy backups\n- Large data folders (Akonadi, Baloo index) can be skipped for config backups\n", "filepath": "kde/backup-kde-settings.md" }, { "name": "list-kde-shortcuts", "category": "kde", "description": "", "tags": [], "content": "# List KDE Shortcuts\n\nYou are helping the user view all configured 
keyboard shortcuts in KDE Plasma.\n\n## Task\n\n1. **Display global shortcuts:**\n ```bash\n # Read global shortcuts config\n cat ~/.config/kglobalshortcutsrc\n ```\n\n2. **Parse and format shortcuts nicely:**\n ```bash\n # Extract and format shortcuts\n grep -E \"^[A-Za-z].*=\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n sed 's/\\\\t/\\t/g' | \\\n column -t -s= | \\\n head -50\n ```\n\n3. **List shortcuts by category:**\n\n **KWin (Window Manager) shortcuts:**\n ```bash\n grep -A 200 \"^\\[kwin\\]\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n grep \"=\" | \\\n sed 's/,.*$//' | \\\n column -t -s=\n ```\n\n **Plasma Desktop shortcuts:**\n ```bash\n grep -A 100 \"^\\[plasmashell\\]\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n grep \"=\" | \\\n sed 's/,.*$//' | \\\n column -t -s=\n ```\n\n **Application shortcuts:**\n ```bash\n grep -A 50 \"^\\[org.kde.dolphin\\]\" ~/.config/kglobalshortcutsrc | \\\n grep \"=\" | \\\n column -t -s=\n ```\n\n4. **Find specific shortcut by key combination:**\n ```bash\n # Search for Meta (Super) key shortcuts\n grep -i \"meta\" ~/.config/kglobalshortcutsrc | grep -v \"^_\" | head -20\n\n # Search for Ctrl+Alt shortcuts\n grep -i \"ctrl.*alt\" ~/.config/kglobalshortcutsrc | grep -v \"^_\" | head -20\n\n # Search for F-key shortcuts\n grep -E \"F[0-9]+\" ~/.config/kglobalshortcutsrc | grep -v \"^_\" | head -20\n ```\n\n5. **Create formatted shortcut reference:**\n ```bash\n cat > /tmp/kde-shortcuts.txt << 'EOF'\n KDE Plasma Keyboard Shortcuts\n ==============================\n Generated: $(date)\n\n === WINDOW MANAGEMENT (KWin) ===\n EOF\n\n grep -A 200 \"^\\[kwin\\]\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n grep \"=\" | \\\n sed 's/=\\(.*\\),.*/\\t\\1/' | \\\n sed 's/\\\\t/\\t\u2192\\t/' >> /tmp/kde-shortcuts.txt\n\n echo -e \"\\n\\n=== PLASMA DESKTOP ===\" >> /tmp/kde-shortcuts.txt\n grep -A 100 \"^\\[plasmashell\\]\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n grep \"=\" | \\\n sed 's/=\\(.*\\),.*/\\t\\1/' | \\\n sed 's/\\\\t/\\t\u2192\\t/' >> /tmp/kde-shortcuts.txt\n\n cat /tmp/kde-shortcuts.txt\n ```\n\n6. **Show default vs custom shortcuts:**\n ```bash\n # Compare with default config\n diff <(grep \"=\" /usr/share/kconf_update/kglobalshortcutsrc 2>/dev/null | sort) \\\n <(grep \"=\" ~/.config/kglobalshortcutsrc | sort) | \\\n grep \"^>\" | \\\n head -20\n ```\n\n7. **List application-specific shortcuts:**\n\n **Dolphin:**\n ```bash\n kreadconfig6 --file ~/.config/dolphinrc --group \"Shortcuts\"\n ```\n\n **Konsole:**\n ```bash\n kreadconfig6 --file ~/.config/konsolerc --group \"Shortcuts\"\n ```\n\n **Kate:**\n ```bash\n grep -A 100 \"^\\[Shortcuts\\]\" ~/.config/katerc 2>/dev/null\n ```\n\n8. **Export shortcuts to easily readable format:**\n ```bash\n # Create markdown format\n cat > /tmp/kde-shortcuts.md << 'EOF'\n # KDE Plasma Keyboard Shortcuts\n\n ## Window Management\n\n | Action | Shortcut |\n |--------|----------|\n EOF\n\n grep -A 200 \"^\\[kwin\\]\" ~/.config/kglobalshortcutsrc | \\\n grep -v \"^_\" | \\\n grep \"=\" | \\\n sed 's/=\\([^,]*\\),.*/|\\1|/' | \\\n sed 's/\\\\t/|/' | \\\n sed 's/^/|/' >> /tmp/kde-shortcuts.md\n\n cat /tmp/kde-shortcuts.md\n ```\n\n9. **Check for shortcut conflicts:**\n ```bash\n # Find duplicate key bindings\n grep \"=\" ~/.config/kglobalshortcutsrc | \\\n sed 's/.*=\\([^,]*\\),.*/\\1/' | \\\n grep -v \"^$\\|none\" | \\\n sort | \\\n uniq -d\n ```\n\n10. 
**Interactive shortcut search:**\n ```bash\n # Function to search shortcuts\n cat > /tmp/search-shortcuts.sh << 'EOF'\n #!/bin/bash\n\n if [ -z \"$1\" ]; then\n echo \"Usage: $0 \"\n echo \"Example: $0 desktop\"\n echo \"Example: $0 Meta+D\"\n exit 1\n fi\n\n echo \"Searching for: $1\"\n echo \"================================\"\n\n grep -i \"$1\" ~/.config/kglobalshortcutsrc | \\\n grep \"=\" | \\\n grep -v \"^_\" | \\\n sed 's/=\\([^,]*\\),.*/\\t\u2192\\t\\1/' | \\\n sed 's/\\\\t/\\t/'\n EOF\n\n chmod +x /tmp/search-shortcuts.sh\n\n # Example usage\n /tmp/search-shortcuts.sh \"window\"\n ```\n\n## Common Shortcuts Reference\n\nCreate a quick reference of most-used shortcuts:\n```bash\ncat > /tmp/kde-common-shortcuts.md << 'EOF'\n# KDE Plasma - Common Default Shortcuts\n\n## Window Management\n- `Meta + Arrow` - Snap window to edge\n- `Meta + PageUp/PageDown` - Window to desktop above/below\n- `Alt + F2` - Run command (KRunner)\n- `Alt + F3` - Window menu\n- `Alt + F4` - Close window\n- `Meta + Tab` - Switch windows\n\n## Desktop\n- `Meta + D` - Show desktop\n- `Ctrl + F1-F12` - Switch virtual desktops\n- `Meta + PgUp/PgDn` - Move to desktop above/below\n\n## Plasma\n- `Alt + Space` - KRunner\n- `Meta` - Application Launcher\n- `Ctrl + Esc` - System Activity (task manager)\n\n## Screenshots\n- `Print` - Full screen screenshot\n- `Meta + Print` - Window screenshot\n- `Meta + Shift + Print` - Region screenshot\n\n## Applications\n- `Meta + E` - Dolphin (file manager)\n- `Ctrl + Alt + T` - Terminal (if configured)\nEOF\n\ncat /tmp/kde-common-shortcuts.md\n```\n\n## GUI Shortcut Management\n\nIf user prefers GUI:\n```bash\n# Open System Settings to Shortcuts\nkcmshell6 keys\n# OR\nsystemsettings kcm_keys\n```\n\n## Export for Documentation\n\n```bash\n# Create detailed export\ncat > ~/kde-shortcuts-export-$(date +%Y%m%d).txt << EOF\nKDE Plasma Keyboard Shortcuts Export\n=====================================\nDate: $(date)\nUser: $USER\nHostname: $(hostname)\nPlasma Version: $(plasmashell --version)\n\nEOF\n\n# Add all shortcuts by category\nfor section in kwin plasmashell org_kde_powerdevil khotkeys; do\n echo -e \"\\n=== $section ===\\n\" >> ~/kde-shortcuts-export-$(date +%Y%m%d).txt\n grep -A 500 \"^\\[$section\\]\" ~/.config/kglobalshortcutsrc | \\\n grep \"=\" | \\\n grep -v \"^_\" | \\\n head -100 >> ~/kde-shortcuts-export-$(date +%Y%m%d).txt\ndone\n\necho \"Exported to: ~/kde-shortcuts-export-$(date +%Y%m%d).txt\"\n```\n\n## Find Unassigned Actions\n\n```bash\n# Actions that have no shortcut assigned\ngrep \"=none\" ~/.config/kglobalshortcutsrc | \\\ngrep -v \"^_\" | \\\nsed 's/=.*//' | \\\nhead -30\n```\n\n## Troubleshooting\n\n**Shortcuts not working:**\n```bash\n# Check if shortcuts are enabled\nkreadconfig6 --file kglobalshortcutsrc --group \"KDE Keyboard Layout Switcher\" --key \"_k_friendly_name\"\n\n# Restart KDE shortcuts daemon\nkquitapp6 kglobalaccel\nkglobalaccel &\n```\n\n**Reset shortcuts to defaults:**\n```bash\n# Backup current\ncp ~/.config/kglobalshortcutsrc ~/.config/kglobalshortcutsrc.backup\n\n# Remove custom shortcuts\nrm ~/.config/kglobalshortcutsrc\n\n# Restart Plasma\nkquitapp6 plasmashell && kstart plasmashell\n```\n\n## Notes\n\n- Shortcuts are stored in `~/.config/kglobalshortcutsrc`\n- Format: `action=key,friendly_name,component`\n- `Meta` key = Super/Windows key\n- Some apps store shortcuts in their own config files\n- Use System Settings GUI for easier shortcut management\n- Custom shortcuts can be added via khotkeys\n- Conflicts are usually handled 
automatically by KDE\n", "filepath": "kde/list-kde-shortcuts.md" }, { "name": "optimize-kde-performance", "category": "kde", "description": "", "tags": [], "content": "# Optimize KDE Performance\n\nYou are helping the user tune KDE Plasma settings for better performance and responsiveness.\n\n## Task\n\n1. **Check current performance baseline:**\n ```bash\n # CPU/RAM usage of KDE processes\n ps aux | grep -E \"plasma|kwin\" | grep -v grep\n\n # Memory usage\n free -h\n\n # KWin resource usage\n top -b -n 1 | grep -E \"plasma|kwin\"\n ```\n\n2. **Disable unnecessary visual effects:**\n ```bash\n # Reduce KWin effects\n kwriteconfig6 --file kwinrc --group Plugins --key blurEnabled false\n kwriteconfig6 --file kwinrc --group Plugins --key contrastEnabled false\n kwriteconfig6 --file kwinrc --group Plugins --key slidebackEnabled false\n kwriteconfig6 --file kwinrc --group Plugins --key zoomEnabled false\n\n # Disable desktop effects for slower systems\n kwriteconfig6 --file kwinrc --group Compositing --key Enabled false\n\n # Or keep compositing but reduce effects\n kwriteconfig6 --file kwinrc --group Compositing --key AnimationSpeed 3\n\n # Restart KWin to apply\n qdbus org.kde.KWin /KWin reconfigure\n ```\n\n3. **Optimize compositor settings:**\n ```bash\n # Use OpenGL 3.1 (faster than 2.0, more compatible than 3.1 Core)\n kwriteconfig6 --file kwinrc --group Compositing --key GLCore false\n kwriteconfig6 --file kwinrc --group Compositing --key GLPlatformInterface egl\n\n # Set rendering backend (EGL is usually faster)\n kwriteconfig6 --file kwinrc --group Compositing --key Backend OpenGL\n\n # Disable VSync for lower latency (may cause tearing)\n # kwriteconfig6 --file kwinrc --group Compositing --key GLPreferBufferSwap n\n\n # Or use adaptive VSync\n kwriteconfig6 --file kwinrc --group Compositing --key GLPreferBufferSwap a\n\n # Reduce latency\n kwriteconfig6 --file kwinrc --group Compositing --key LatencyControl false\n\n qdbus org.kde.KWin /KWin reconfigure\n ```\n\n4. **Disable Baloo file indexing (if not needed):**\n ```bash\n # Disable Baloo\n balooctl disable\n\n # Stop Baloo service\n balooctl stop\n\n # Check status\n balooctl status\n\n # Or configure to index only specific folders\n balooctl config add /home/daniel/Documents\n balooctl enable\n ```\n\n5. **Reduce desktop search scope:**\n ```bash\n # Configure Baloo to exclude large directories\n kwriteconfig6 --file baloofilerc --group \"General\" --key \"exclude filters\" \"*.tmp,*.o,*.pyc\"\n kwriteconfig6 --file baloofilerc --group \"General\" --key \"folders[$e]\" \"$HOME/Downloads/,$HOME/.cache/,$HOME/.local/share/Trash/\"\n\n balooctl restart\n ```\n\n6. **Optimize Plasma widget performance:**\n ```bash\n # Disable weather widget auto-update\n kwriteconfig6 --file plasma-org.kde.plasma.desktop-appletsrc --group \"Containments\" --group \"1\" --group \"Applets\" --group \"org.kde.plasma.weather\" --key \"updateInterval\" 3600\n\n # Reduce system monitor update frequency\n # (Edit via GUI: Right-click widget -> Configure)\n ```\n\n7. **Reduce animation speed or disable:**\n ```bash\n # Faster animations\n kwriteconfig6 --file kdeglobals --group KDE --key AnimationDurationFactor 0.5\n\n # Disable animations entirely\n kwriteconfig6 --file kdeglobals --group KDE --key AnimationDurationFactor 0\n\n # Apply changes\n kquitapp6 plasmashell && kstart plasmashell\n ```\n\n8. 
**Optimize KWin window management:**\n ```bash\n # Disable window focus effects\n kwriteconfig6 --file kwinrc --group Plugins --key diminactiveEnabled false\n kwriteconfig6 --file kwinrc --group Plugins --key dimscreenEnabled false\n\n # Faster window switching\n kwriteconfig6 --file kwinrc --group TabBox --key DelayTime 0\n\n # Instant window placement\n kwriteconfig6 --file kwinrc --group Windows --key Placement Smart\n\n qdbus org.kde.KWin /KWin reconfigure\n ```\n\n9. **Disable unnecessary Plasma features:**\n ```bash\n # Disable desktop thumbnails\n kwriteconfig6 --file kwinrc --group Plugins --key thumbnailasideEnabled false\n\n # Disable desktop grid effect\n kwriteconfig6 --file kwinrc --group Effect-DesktopGrid --key ShowAddRemove false\n\n # Disable magic lamp effect\n kwriteconfig6 --file kwinrc --group Plugins --key magiclampEnabled false\n\n qdbus org.kde.KWin /KWin reconfigure\n ```\n\n10. **Configure KWin for better performance:**\n ```bash\n # Unredirect fullscreen windows (better gaming performance)\n kwriteconfig6 --file kwinrc --group Compositing --key UnredirectFullscreen true\n\n # Allow tearing for low latency\n kwriteconfig6 --file kwinrc --group Compositing --key AllowTearing true\n\n # Refresh rate (match your monitor)\n kwriteconfig6 --file kwinrc --group Compositing --key RefreshRate 0 # Auto-detect\n\n qdbus org.kde.KWin /KWin reconfigure\n ```\n\n11. **Optimize system tray:**\n ```bash\n # Remove unnecessary system tray icons\n # (Done via GUI: Right-click system tray -> Configure System Tray -> Entries)\n\n # Check what's running in system tray\n qdbus | grep \"org.kde.StatusNotifierItem\"\n ```\n\n12. **Reduce memory usage:**\n ```bash\n # Clear Plasma cache\n rm -rf ~/.cache/plasma*\n rm -rf ~/.cache/kwin*\n\n # Disable clipboard history\n kwriteconfig6 --file klipperrc --group General --key KeepClipboardContents false\n\n # Reduce clipboard history size\n kwriteconfig6 --file klipperrc --group General --key MaxClipItems 5\n ```\n\n13. **Disable KDE Connect if not needed:**\n ```bash\n # Stop KDE Connect\n kdeconnect-cli --refresh\n systemctl --user stop kdeconnect\n systemctl --user disable kdeconnect\n ```\n\n14. **Optimize font rendering:**\n ```bash\n # Disable font anti-aliasing for speed (not recommended for readability)\n # kwriteconfig6 --file kcmfonts --group General --key forceFontDPI 96\n\n # Use faster font rendering\n kwriteconfig6 --file kcmfonts --group General --key XftAntialias true\n kwriteconfig6 --file kcmfonts --group General --key XftHintStyle hintslight\n ```\n\n15. 
**Monitor performance improvements:**\n ```bash\n # Before and after comparison\n echo \"=== Plasma Performance ===\"\n ps aux | grep plasmashell | grep -v grep | awk '{print \"CPU: \"$3\"% RAM: \"$4\"%\"}'\n\n echo \"=== KWin Performance ===\"\n ps aux | grep kwin | grep -v grep | awk '{print \"CPU: \"$3\"% RAM: \"$4\"%\"}'\n\n echo \"=== Total KDE Memory Usage ===\"\n ps aux | grep -E \"plasma|kwin|kde\" | awk '{sum+=$6} END {print sum/1024 \" MB\"}'\n ```\n\n## Performance Testing\n\nCreate benchmark script:\n```bash\ncat > /tmp/kde-performance-test.sh << 'EOF'\n#!/bin/bash\n\necho \"KDE Performance Test\"\necho \"====================\"\necho \"\"\n\n# Test 1: Plasma shell responsiveness\necho \"Test 1: Measuring Plasma restart time...\"\nstart=$(date +%s%N)\nkquitapp6 plasmashell && kstart plasmashell\nsleep 3\nend=$(date +%s%N)\necho \"Plasma restart: $((($end-$start)/1000000)) ms\"\n\n# Test 2: KWin reconfigure time\necho \"Test 2: Measuring KWin reconfigure time...\"\nstart=$(date +%s%N)\nqdbus org.kde.KWin /KWin reconfigure\nend=$(date +%s%N)\necho \"KWin reconfigure: $((($end-$start)/1000000)) ms\"\n\n# Test 3: Resource usage\necho \"Test 3: Current resource usage...\"\nps aux | grep -E \"plasma|kwin\" | grep -v grep | awk '{print $11 \": CPU=\"$3\"% MEM=\"$4\"%\"}'\n\nEOF\n\nchmod +x /tmp/kde-performance-test.sh\n/tmp/kde-performance-test.sh\n```\n\n## Revert to Defaults\n\nIf optimizations cause issues:\n```bash\n# Backup then remove KWin config\nmv ~/.config/kwinrc ~/.config/kwinrc.optimized\nkquitapp6 kwin_wayland && kstart kwin_wayland\n\n# Reset Plasma config\nmv ~/.config/plasmarc ~/.config/plasmarc.optimized\nkquitapp6 plasmashell && kstart plasmashell\n\n# Reset KDE globals\nmv ~/.config/kdeglobals ~/.config/kdeglobals.optimized\n```\n\n## Hardware-Specific Optimizations\n\n**For AMD GPUs:**\n```bash\n# Use AMDGPU backend\nkwriteconfig6 --file kwinrc --group Compositing --key GLPlatformInterface egl\n\n# Enable TearFree if tearing occurs\n# (Set in xorg.conf or kernel parameters)\n```\n\n**For older/slower systems:**\n```bash\n# Minimal effects\nkwriteconfig6 --file kwinrc --group Plugins --key kwin4_effect_translucyEnabled false\nkwriteconfig6 --file kwinrc --group Plugins --key kwin4_effect_fadeEnabled false\n\n# Disable compositing entirely\nkwriteconfig6 --file kwinrc --group Compositing --key Enabled false\n```\n\n**For high-end systems:**\n```bash\n# Enable all effects\nkwriteconfig6 --file kwinrc --group Compositing --key AnimationSpeed 1\nkwriteconfig6 --file kwinrc --group Plugins --key blurEnabled true\n```\n\n## Notes\n\n- Test changes one at a time to identify what helps\n- Some changes require logging out/in to fully apply\n- Disabling compositing may cause tearing and disable effects\n- Baloo indexing can be heavy on CPU/disk during initial index\n- Keep compositor enabled for VRR/FreeSync support\n- Monitor GPU usage with `radeontop` or `nvidia-smi`\n- Check KWin compositor info: `qdbus org.kde.KWin /KWin supportInformation`\n", "filepath": "kde/optimize-kde-performance.md" }, { "name": "reset-plasma-config", "category": "kde", "description": "", "tags": [], "content": "# Reset Plasma Configuration\n\nYou are helping the user reset corrupted or problematic KDE Plasma settings back to defaults.\n\n## Task\n\n**WARNING:** This will reset KDE customizations. Back up first if you want to preserve any settings.\n\n1. 
**Ask user what to reset:**\n - Full Plasma reset (panels, desktop, all settings)\n - Plasma desktop and panels only\n - Specific application (Dolphin, Konsole, etc.)\n - Window manager (KWin) only\n - Shortcuts only\n\n2. **Backup current configuration (recommended):**\n ```bash\n # Create backup before reset\n BACKUP_DIR=~/.kde-backups/pre-reset-$(date +%Y%m%d-%H%M%S)\n mkdir -p \"$BACKUP_DIR\"\n cp -r ~/.config/plasma* ~/.config/k* \"$BACKUP_DIR/\" 2>/dev/null\n echo \"Backup created: $BACKUP_DIR\"\n ```\n\n3. **Full Plasma Reset:**\n ```bash\n # Stop Plasma\n kquitapp6 plasmashell\n\n # Remove Plasma configuration\n rm -rf ~/.config/plasma*\n rm ~/.config/plasmarc\n rm ~/.config/plasmashellrc\n\n # Remove desktop and panel configs\n rm -rf ~/.local/share/plasma\n rm -rf ~/.local/share/plasmashell\n\n # Optional: Reset KDE globals\n rm ~/.config/kdeglobals\n\n # Restart Plasma\n kstart plasmashell\n ```\n\n4. **Reset Panels and Desktop Only:**\n ```bash\n # Stop Plasma\n kquitapp6 plasmashell\n\n # Remove panel and desktop layouts\n rm -rf ~/.config/plasma-org.kde.plasma.desktop-appletsrc\n rm -rf ~/.local/share/plasma/plasmoids\n rm -rf ~/.local/share/plasma/layout-templates\n\n # Restart Plasma\n kstart plasmashell\n ```\n\n5. **Reset Window Manager (KWin):**\n ```bash\n # Stop KWin (will restart automatically)\n kwin_x11 --replace & # For X11\n # OR\n kwin_wayland --replace & # For Wayland\n\n # Or reset KWin config\n mv ~/.config/kwinrc ~/.config/kwinrc.backup\n kquitapp6 kwin_wayland && kstart kwin_wayland\n ```\n\n6. **Reset Keyboard Shortcuts:**\n ```bash\n # Backup then remove shortcuts\n cp ~/.config/kglobalshortcutsrc ~/.config/kglobalshortcutsrc.backup\n rm ~/.config/kglobalshortcutsrc\n\n # Restart to apply\n kquitapp6 plasmashell && kstart plasmashell\n ```\n\n7. **Reset Specific Applications:**\n\n **Dolphin:**\n ```bash\n rm ~/.config/dolphinrc\n rm -rf ~/.local/share/dolphin\n ```\n\n **Konsole:**\n ```bash\n rm ~/.config/konsolerc\n rm -rf ~/.local/share/konsole # Removes custom profiles\n ```\n\n **Kate:**\n ```bash\n rm ~/.config/katerc\n rm ~/.config/kateschemarc\n rm -rf ~/.local/share/kate\n ```\n\n **Spectacle (screenshots):**\n ```bash\n rm ~/.config/spectaclerc\n ```\n\n **System Settings:**\n ```bash\n rm ~/.config/systemsettingsrc\n ```\n\n8. **Reset Theme and Appearance:**\n ```bash\n # Remove theme configs\n rm ~/.config/plasmarc\n rm ~/.config/kcmfonts\n rm ~/.config/kcminputrc\n\n # Remove custom color schemes\n rm -rf ~/.local/share/color-schemes\n\n # Reset to default theme\n kwriteconfig6 --file plasmarc --group Theme --key name breeze\n ```\n\n9. **Clear Plasma Cache:**\n ```bash\n # Remove cached data\n rm -rf ~/.cache/plasma*\n rm -rf ~/.cache/kwin\n rm -rf ~/.cache/icon-cache.kcache\n\n # Rebuild icon cache\n kbuildsycoca6 --noincremental\n ```\n\n10. **Nuclear Option - Complete KDE Reset:**\n ```bash\n # ONLY if really needed - this resets EVERYTHING\n kquitapp6 plasmashell\n\n # Move all KDE configs (preserves them for recovery)\n mkdir -p ~/kde-config-backup-$(date +%Y%m%d)\n mv ~/.config/k* ~/kde-config-backup-$(date +%Y%m%d)/ 2>/dev/null\n mv ~/.config/plasma* ~/kde-config-backup-$(date +%Y%m%d)/ 2>/dev/null\n mv ~/.local/share/k* ~/kde-config-backup-$(date +%Y%m%d)/ 2>/dev/null\n mv ~/.local/share/plasma* ~/kde-config-backup-$(date +%Y%m%d)/ 2>/dev/null\n\n # Log out and back in to regenerate all configs\n qdbus org.kde.ksmserver /KSMServer logout 0 0 0\n ```\n\n## Verification Steps\n\nAfter reset:\n1. 
Check if Plasma is running: `pgrep plasmashell`\n2. Verify panels appeared: Look at screen\n3. Check System Settings opens: `systemsettings`\n4. Test application launches\n5. Check for error logs: `journalctl --user -xe | grep -i plasma`\n\n## Common Issues & Solutions\n\n**Plasma doesn't restart:**\n```bash\n# Force start\nplasmashell &\n\n# Or from TTY (Ctrl+Alt+F2)\nexport DISPLAY=:0\nplasmashell &\n```\n\n**Black screen after reset:**\n```bash\n# Check if running\npgrep plasmashell || plasmashell &\n\n# Restart display manager\nsudo systemctl restart sddm\n```\n\n**Settings not actually reset:**\n```bash\n# Make sure Plasma was stopped first\nkillall plasmashell\nsleep 2\nrm ~/.config/plasmarc\nplasmashell &\n```\n\n**Want to undo reset:**\n```bash\n# Restore from backup\nkquitapp6 plasmashell\ncp -r $BACKUP_DIR/* ~/.config/\nkstart plasmashell\n```\n\n## Selective Config Removal\n\nRemove only problematic configs:\n```bash\n# List all KDE configs with sizes\nls -lhS ~/.config/k* ~/.config/plasma* 2>/dev/null\n\n# Check modification dates to find recently changed\nls -lt ~/.config/k* ~/.config/plasma* 2>/dev/null | head -20\n\n# Move suspect config instead of deleting\nmv ~/.config/problematic-file ~/.config/problematic-file.old\n```\n\n## When to Use Each Reset\n\n- **Panel disappeared:** Reset panels only\n- **Widgets broken:** Clear Plasma cache + restart\n- **Shortcuts not working:** Reset kglobalshortcutsrc\n- **Window effects glitching:** Reset KWin config\n- **Dolphin crashes:** Reset Dolphin config only\n- **Everything broken:** Full Plasma reset\n- **Fresh start needed:** Nuclear option\n\n## Recovery Tools\n\n```bash\n# View current Plasma errors\njournalctl --user -xe | grep -iE \"plasma|kwin\"\n\n# Check config file syntax\nkreadconfig6 --file plasmarc --group Theme --key name\n\n# Rebuild KDE config cache\nkbuildsycoca6\n\n# Check for corrupt databases\nrm ~/.local/share/kactivitymanagerd/resources/database*\n```\n\n## Notes\n\n- Always backup before resetting\n- Some settings are in `~/.local/share/` not `~/.config/`\n- Plasma 6 uses different file locations than Plasma 5\n- Window rules are stored in `~/.config/kwinrulesrc`\n- Desktop effects settings in `~/.config/kwinrc`\n- After major resets, you may need to log out/in instead of just restarting Plasma\n- Custom installed widgets/plasmoids may need to be reinstalled\n", "filepath": "kde/reset-plasma-config.md" } ], "logging": [ { "name": "analyze-journal-errors", "category": "logging", "description": "", "tags": [], "content": "# Analyze Journal Errors\n\nYou are helping the user parse systemd journal logs to identify recent errors and issues.\n\n## Task\n\n1. **Check recent errors from current boot:**\n ```bash\n # Errors from current boot\n journalctl -b -p err\n\n # Errors and warnings\n journalctl -b -p warning\n\n # Critical and alert level messages\n journalctl -b -p crit\n ```\n\n2. **Show errors from specific time periods:**\n ```bash\n # Last hour\n journalctl --since \"1 hour ago\" -p err\n\n # Last 24 hours\n journalctl --since \"24 hours ago\" -p err\n\n # Specific date range\n journalctl --since \"2025-10-25\" --until \"2025-10-26\" -p err\n\n # Last 100 error entries\n journalctl -p err -n 100\n ```\n\n3. 
**Group errors by service/unit:**\n ```bash\n # List units with failures\n systemctl --failed\n\n # Errors from specific service\n journalctl -u SERVICE_NAME -p err\n\n # Common problematic services\n journalctl -u NetworkManager -p err\n journalctl -u systemd-resolved -p err\n journalctl -u bluetooth -p err\n ```\n\n4. **Analyze error frequency:**\n ```bash\n # Count errors by message\n journalctl -b -p err --no-pager | grep -oP '(?<=: ).*' | sort | uniq -c | sort -rn | head -20\n\n # Errors per unit\n journalctl -b -p err --no-pager | grep -oP '\\w+\\.service' | sort | uniq -c | sort -rn\n ```\n\n5. **Check for kernel errors:**\n ```bash\n # Kernel errors\n journalctl -k -p err\n\n # Segfaults\n journalctl | grep -i \"segfault\"\n\n # OOM killer events\n journalctl | grep -i \"killed process\"\n ```\n\n6. **Find patterns and recurring issues:**\n ```bash\n # I/O errors\n journalctl -b | grep -i \"i/o error\"\n\n # Disk errors\n journalctl -b | grep -i \"ata.*error\"\n\n # Network errors\n journalctl -b | grep -i \"network.*error\\|dhcp.*fail\"\n\n # GPU/graphics errors\n journalctl -b | grep -i \"amdgpu\\|drm.*error\"\n ```\n\n7. **Export error summary:**\n ```bash\n # Save errors to file for analysis\n journalctl -b -p err --no-pager > /tmp/system-errors-$(date +%Y%m%d).log\n\n # Create error report\n cat > /tmp/error-report.txt << EOF\n System Error Report - $(date)\n ======================================\n\n Failed Services:\n $(systemctl --failed --no-pager)\n\n Recent Errors (last 24h):\n $(journalctl --since \"24 hours ago\" -p err --no-pager | tail -50)\n\n Error Summary by Service:\n $(journalctl -b -p err --no-pager | grep -oP '\\w+\\.service' | sort | uniq -c | sort -rn)\n EOF\n\n cat /tmp/error-report.txt\n ```\n\n## Present Summary to User\n\nProvide:\n- Number of errors found in timeframe\n- Most frequent error messages\n- Services/units with errors\n- Critical vs warning vs error breakdown\n- Any patterns (disk, network, GPU issues)\n- Recommended actions for common errors\n\n## Common Error Patterns & Solutions\n\n**NetworkManager errors:**\n- DHCP timeout: Check network cable/WiFi\n- DNS resolution: Check /etc/resolv.conf\n\n**Bluetooth errors:**\n- Adapter reset: `sudo systemctl restart bluetooth`\n- Firmware missing: Check `dmesg | grep -i bluetooth`\n\n**Disk errors:**\n- I/O errors: Run SMART checks with `/check-disk-errors`\n- Filesystem errors: May need `fsck`\n\n**GPU errors:**\n- AMDGPU: Check ROCm installation and kernel modules\n- DRM errors: May indicate driver issues\n\n**systemd-resolved errors:**\n- DNSSEC validation failures: Common with some ISPs\n- Fallback DNS: Configure in `/etc/systemd/resolved.conf`\n\n## Additional Analysis\n\nIf requested:\n- Compare error frequency over different boots: `journalctl --list-boots`\n- Check correlation with specific events (updates, configuration changes)\n- Identify error spikes: `journalctl -b -p err --output=short-monotonic`\n- Export for external analysis: `journalctl -b -p err -o json`\n\n## Notes\n\n- Priority levels: 0=emerg, 1=alert, 2=crit, 3=err, 4=warning, 5=notice, 6=info, 7=debug\n- Use `--no-pager` for scripting and piping\n- Journal size can be checked with: `journalctl --disk-usage`\n- Persistent journal: stored in `/var/log/journal/`\n- Consider rotating old logs: `journalctl --vacuum-time=30d`\n", "filepath": "logging/analyze-journal-errors.md" }, { "name": "check-failed-units", "category": "logging", "description": "", "tags": [], "content": "# Check Failed Systemd Units\n\nYou are helping the user 
identify and diagnose failed systemd units (services, mounts, timers, etc.).\n\n## Task\n\n1. **List all failed units:**\n ```bash\n # Show failed units\n systemctl --failed\n\n # More detailed output\n systemctl --failed --all\n\n # Include user units\n systemctl --user --failed\n ```\n\n2. **Get detailed status of failed units:**\n ```bash\n # For each failed unit, get details\n for unit in $(systemctl --failed --no-legend | awk '{print $1}'); do\n echo \"=== $unit ===\"\n systemctl status \"$unit\" --no-pager -l\n echo \"\"\n done\n ```\n\n3. **Check recent failures:**\n ```bash\n # Units that failed in last boot\n systemctl list-units --failed --state=failed\n\n # Check boot log for failures\n journalctl -b -p err | grep -i \"failed\"\n ```\n\n4. **Analyze specific failed unit:**\n ```bash\n # Status with full output\n systemctl status UNIT_NAME -l --no-pager\n\n # Recent logs for the unit\n journalctl -u UNIT_NAME -n 50 --no-pager\n\n # Logs from current boot\n journalctl -b -u UNIT_NAME --no-pager\n\n # All logs for the unit\n journalctl -u UNIT_NAME --since \"24 hours ago\" --no-pager\n ```\n\n5. **Check unit dependencies:**\n ```bash\n # What this unit depends on\n systemctl list-dependencies UNIT_NAME\n\n # What depends on this unit\n systemctl list-dependencies --reverse UNIT_NAME\n\n # Check if dependencies failed\n systemctl list-dependencies UNIT_NAME --all | while read dep; do\n systemctl is-failed \"$dep\" 2>/dev/null | grep -q \"^failed\" && echo \"FAILED: $dep\"\n done\n ```\n\n6. **Common failure patterns:**\n ```bash\n # Mount failures\n systemctl --failed | grep \".mount\"\n\n # Service failures\n systemctl --failed | grep \".service\"\n\n # Timer failures\n systemctl --failed | grep \".timer\"\n\n # Network-related failures\n systemctl --failed | grep -E \"network|dhcp|dns\"\n ```\n\n7. **Attempt to diagnose failure reason:**\n ```bash\n # Exit code and signal\n systemctl show UNIT_NAME | grep -E \"ExecMainStatus|ExecMainCode|Result\"\n\n # Unit file location and settings\n systemctl cat UNIT_NAME\n\n # Check if unit file exists and is valid\n systemctl show UNIT_NAME -p LoadState,ActiveState,SubState,Result\n ```\n\n8. **Try to restart failed units:**\n ```bash\n # Ask user if they want to attempt restart\n # List failed units\n failed_units=$(systemctl --failed --no-legend | awk '{print $1}')\n\n # For each unit, ask to restart\n for unit in $failed_units; do\n echo \"Attempting to restart: $unit\"\n sudo systemctl restart \"$unit\"\n systemctl is-active --quiet \"$unit\" && echo \"\u2713 $unit restarted successfully\" || echo \"\u2717 $unit restart failed\"\n done\n ```\n\n9. **Check for masked units:**\n ```bash\n # List masked units\n systemctl list-unit-files | grep masked\n\n # Check if failed unit is masked\n systemctl is-enabled UNIT_NAME\n ```\n\n10. 
**Generate failure report:**\n ```bash\n cat > /tmp/failed-units-report.txt << EOF\n Failed Units Report - $(date)\n ======================================\n\n Failed Units Summary:\n $(systemctl --failed --no-pager)\n\n Detailed Status:\n EOF\n\n for unit in $(systemctl --failed --no-legend | awk '{print $1}'); do\n echo \"\" >> /tmp/failed-units-report.txt\n echo \"=== $unit ===\" >> /tmp/failed-units-report.txt\n systemctl status \"$unit\" --no-pager -l >> /tmp/failed-units-report.txt 2>&1\n echo \"\" >> /tmp/failed-units-report.txt\n echo \"Recent Logs:\" >> /tmp/failed-units-report.txt\n journalctl -u \"$unit\" -n 20 --no-pager >> /tmp/failed-units-report.txt 2>&1\n echo \"\" >> /tmp/failed-units-report.txt\n done\n\n cat /tmp/failed-units-report.txt\n ```\n\n## Present Summary to User\n\nProvide:\n- Number of failed units\n- List of failed unit names and types\n- Failure reasons (exit codes, signals)\n- Recent log entries for each\n- Recommended actions\n\n## Common Failed Units & Solutions\n\n**NetworkManager-wait-online.service:**\n- Usually safe to ignore or disable if not needed\n- `sudo systemctl disable NetworkManager-wait-online.service`\n\n**ModemManager.service:**\n- May fail if no modem hardware present\n- Can disable: `sudo systemctl disable ModemManager.service`\n\n**bluetooth.service:**\n- Check firmware: `journalctl -u bluetooth | grep -i firmware`\n- Restart: `sudo systemctl restart bluetooth`\n\n**systemd-resolved.service:**\n- Check config: `/etc/systemd/resolved.conf`\n- DNS issues: `resolvectl status`\n\n**Mount units (*.mount):**\n- Check fstab: `cat /etc/fstab`\n- Verify device exists: `lsblk`\n- Check mount point permissions\n\n**User services:**\n- Check user journal: `journalctl --user -u UNIT_NAME`\n- May need `loginctl enable-linger USER`\n\n## Cleanup Actions\n\n```bash\n# Reset failed state\nsudo systemctl reset-failed\n\n# Disable permanently failed units (ask first!)\nsudo systemctl disable UNIT_NAME\n\n# Mask unit to prevent activation\nsudo systemctl mask UNIT_NAME\n\n# Unmask unit\nsudo systemctl unmask UNIT_NAME\n\n# Reload systemd configuration\nsudo systemctl daemon-reload\n```\n\n## Notes\n\n- Not all failures are critical - some are expected\n- Check if service is actually needed before disabling\n- Some failures may be due to hardware not present (modems, bluetooth)\n- Mount failures can prevent boot - be careful with fstab changes\n- User units are separate from system units\n- Use `systemctl reset-failed` to clear failed state after fixing\n", "filepath": "logging/check-failed-units.md" }, { "name": "monitor-system-resources", "category": "logging", "description": "", "tags": [], "content": "# Monitor System Resources\n\nYou are helping the user monitor live system resource usage (CPU, RAM, disk I/O, network).\n\n## Task\n\n1. **Quick resource overview:**\n ```bash\n # System overview\n top -b -n 1 | head -20\n\n # Better overview with htop (if installed)\n htop\n\n # Modern alternative: btop\n btop\n ```\n\n2. **CPU monitoring:**\n ```bash\n # CPU usage summary\n mpstat 1 5 # 5 samples, 1 second interval\n\n # Per-core usage\n mpstat -P ALL 1 5\n\n # Top CPU consumers\n ps aux --sort=-%cpu | head -10\n\n # CPU frequency and temperature\n watch -n 1 \"grep MHz /proc/cpuinfo | head -20 && sensors\"\n ```\n\n3. 
**Memory monitoring:**\n ```bash\n # Memory usage\n free -h\n\n # Detailed memory info\n cat /proc/meminfo\n\n # Top memory consumers\n ps aux --sort=-%mem | head -10\n\n # Memory usage by process\n smem -tk # If smem is installed\n\n # Watch memory usage\n watch -n 1 free -h\n ```\n\n4. **Disk I/O monitoring:**\n ```bash\n # I/O statistics\n iostat -x 1 5\n\n # Disk usage by device\n iotop -o # Only show active I/O\n\n # Per-process I/O\n iotop -P\n\n # Watch disk I/O\n watch -n 1 \"iostat -x | grep -E 'Device|sd|nvme'\"\n ```\n\n5. **Network monitoring:**\n ```bash\n # Network interface statistics\n ifstat 1 5\n\n # Bandwidth per interface\n bmon\n\n # Network speed\n iftop\n\n # Connection summary\n ss -s\n\n # Active connections\n nethogs\n ```\n\n6. **Combined system monitoring:**\n ```bash\n # All-in-one monitoring\n glances\n\n # Custom dashboard\n watch -n 1 '\n echo \"=== CPU ===\"\n top -b -n 1 | head -5 | tail -2\n echo \"\"\n echo \"=== Memory ===\"\n free -h | grep -E \"Mem|Swap\"\n echo \"\"\n echo \"=== Disk I/O ===\"\n iostat -x 1 1 | grep -E \"Device|sd|nvme\" | head -5\n echo \"\"\n echo \"=== Network ===\"\n ifstat 1 1 | tail -1\n '\n ```\n\n7. **GPU monitoring (AMD):**\n ```bash\n # AMD GPU usage\n radeontop\n\n # GPU sensors\n watch -n 1 \"rocm-smi\"\n\n # Detailed GPU info\n watch -n 1 \"rocm-smi --showuse --showmemuse --showtemp\"\n ```\n\n8. **Process tree with resource usage:**\n ```bash\n # Process tree\n pstree -p\n\n # Resource-aware process tree\n ps auxf\n\n # Find resource hogs\n ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head -20\n ```\n\n9. **System load monitoring:**\n ```bash\n # Load average\n uptime\n\n # Load over time\n watch -n 1 \"uptime && cat /proc/loadavg\"\n\n # Who's causing load\n tload -d 1\n ```\n\n10. 
**Create monitoring script:**\n ```bash\n cat > /tmp/system-monitor.sh << 'EOF'\n #!/bin/bash\n echo \"System Resource Monitor - $(date)\"\n echo \"========================================\"\n echo \"\"\n echo \"CPU Usage:\"\n mpstat 1 1 | tail -1\n echo \"\"\n echo \"Memory:\"\n free -h | grep -E \"Mem|Swap\"\n echo \"\"\n echo \"Load Average:\"\n uptime\n echo \"\"\n echo \"Top 5 CPU Processes:\"\n ps aux --sort=-%cpu | head -6 | tail -5\n echo \"\"\n echo \"Top 5 Memory Processes:\"\n ps aux --sort=-%mem | head -6 | tail -5\n echo \"\"\n echo \"Disk Usage:\"\n df -h / /home\n echo \"\"\n echo \"Disk I/O:\"\n iostat -x 1 1 | grep -E \"sd|nvme\"\n EOF\n\n chmod +x /tmp/system-monitor.sh\n /tmp/system-monitor.sh\n ```\n\n## Present Summary to User\n\nProvide snapshot of:\n- **CPU:** Usage %, load average, top processes\n- **Memory:** Used/Free, swap usage, top consumers\n- **Disk:** Usage %, I/O wait, active reads/writes\n- **Network:** Bandwidth usage, active connections\n- **GPU:** Usage % (if applicable)\n\nFlag any concerning patterns:\n- High CPU usage (>80% sustained)\n- Low memory (<10% free)\n- High swap usage\n- Disk I/O bottlenecks\n- Network saturation\n\n## Install Monitoring Tools if Needed\n\n```bash\n# Install commonly needed tools\nsudo apt install -y \\\n htop \\\n btop \\\n iotop \\\n iftop \\\n nethogs \\\n glances \\\n sysstat \\\n bmon \\\n radeontop\n```\n\n## Troubleshooting High Usage\n\n**High CPU:**\n- Identify process: `top` or `htop`\n- Check if legitimate (updates, backups, encoding)\n- Kill if necessary: `kill -15 PID`\n\n**High Memory:**\n- Check for memory leaks: `ps aux --sort=-%mem`\n- Clear caches: `sudo sync && sudo sysctl -w vm.drop_caches=3`\n- Check for swap thrashing\n\n**High Disk I/O:**\n- Identify process: `iotop -o`\n- Check if expected (backups, indexing, updates)\n- Monitor disk health: `/check-disk-errors`\n\n**High Network:**\n- Identify connections: `nethogs` or `iftop`\n- Check for unexpected traffic\n- Use `ss -tunap` to see connections\n\n## Notes\n\n- Most monitoring tools require sudo for full functionality\n- `glances` provides best all-in-one view\n- `btop` is modern, colorful alternative to `htop`\n- Some tools need to be installed separately\n- For persistent monitoring, consider setting up Prometheus/Grafana\n- Check `systemd-cgtop` for cgroup resource usage\n", "filepath": "logging/monitor-system-resources.md" }, { "name": "tail-system-logs", "category": "logging", "description": "", "tags": [], "content": "# Tail System Logs\n\nYou are helping the user monitor system logs in real-time for debugging and system monitoring.\n\n## Task\n\n1. **Follow all system logs:**\n ```bash\n # Follow journal in real-time\n journalctl -f\n\n # Follow with timestamp\n journalctl -f -o short-precise\n\n # Follow only errors and above\n journalctl -f -p err\n ```\n\n2. **Follow specific services:**\n ```bash\n # Specific service\n journalctl -u SERVICE_NAME -f\n\n # Multiple services\n journalctl -u NetworkManager -u systemd-resolved -f\n\n # Example: Common services to monitor\n journalctl -u sddm -u plasmashell -f # KDE\n journalctl -u gdm -u gnome-shell -f # GNOME\n ```\n\n3. **Follow kernel messages:**\n ```bash\n # Kernel ring buffer\n dmesg -w\n\n # Kernel logs from journal\n journalctl -k -f\n\n # Specific kernel subsystem (e.g., USB)\n dmesg -w | grep -i usb\n ```\n\n4. 
**Follow authentication logs:**\n ```bash\n # Auth attempts\n journalctl -u ssh -u sudo -f\n\n # Login attempts\n journalctl _SYSTEMD_UNIT=systemd-logind.service -f\n\n # Traditional auth log (if available)\n tail -f /var/log/auth.log\n ```\n\n5. **Follow application logs:**\n ```bash\n # X11 session\n tail -f ~/.xsession-errors\n\n # Wayland session\n journalctl --user -f\n\n # Specific application\n journalctl -f | grep -i \"application-name\"\n ```\n\n6. **Follow with filtering:**\n ```bash\n # Only show errors/warnings\n journalctl -f -p warning\n\n # Filter by identifier\n journalctl -f -t identifier-name\n\n # Specific priority range\n journalctl -f -p err..warning\n\n # Grep for specific terms\n journalctl -f | grep -i \"error\\|fail\\|critical\"\n ```\n\n7. **Multi-pane log viewing:**\n ```bash\n # Using tmux to watch multiple logs\n tmux new-session -s logs \\; \\\n split-window -v \\; \\\n split-window -h \\; \\\n select-pane -t 0 \\; \\\n send-keys 'journalctl -f -p err' C-m \\; \\\n select-pane -t 1 \\; \\\n send-keys 'dmesg -w' C-m \\; \\\n select-pane -t 2 \\; \\\n send-keys 'journalctl -u NetworkManager -f' C-m\n ```\n\n8. **Follow with context:**\n ```bash\n # Last 100 lines plus new\n journalctl -n 100 -f\n\n # Since specific time\n journalctl --since \"10 minutes ago\" -f\n\n # This boot plus new\n journalctl -b -f\n ```\n\n9. **Custom log monitoring script:**\n ```bash\n cat > /tmp/log-monitor.sh << 'EOF'\n #!/bin/bash\n\n # Colors\n RED='\\033[0;31m'\n YELLOW='\\033[1;33m'\n NC='\\033[0m' # No Color\n\n echo \"Monitoring system logs for critical events...\"\n echo \"Press Ctrl+C to stop\"\n echo \"\"\n\n journalctl -f -o short-precise -p warning | while read line; do\n if echo \"$line\" | grep -qi \"error\\|fail\\|critical\"; then\n echo -e \"${RED}$line${NC}\"\n elif echo \"$line\" | grep -qi \"warning\\|warn\"; then\n echo -e \"${YELLOW}$line${NC}\"\n else\n echo \"$line\"\n fi\n done\n EOF\n\n chmod +x /tmp/log-monitor.sh\n /tmp/log-monitor.sh\n ```\n\n10. 
**Interactive log browser:**\n ```bash\n # Use journalctl with cursor navigation\n journalctl --no-pager -n 1000 | less +G\n\n # Or use GUI log viewer\n ksystemlog # KDE\n gnome-logs # GNOME\n ```\n\n## Common Monitoring Scenarios\n\n**Debugging boot issues:**\n```bash\n# Watch boot process (from another TTY or SSH)\njournalctl -b -f\n```\n\n**Network troubleshooting:**\n```bash\njournalctl -u NetworkManager -u systemd-resolved -u wpa_supplicant -f\n```\n\n**Display/GPU issues:**\n```bash\njournalctl -f | grep -iE \"drm|amdgpu|nvidia|wayland|xorg\"\n```\n\n**USB device debugging:**\n```bash\ndmesg -w | grep -i usb\n```\n\n**Bluetooth issues:**\n```bash\njournalctl -u bluetooth -f\n```\n\n**Audio problems:**\n```bash\njournalctl --user -u pipewire -u wireplumber -f\n```\n\n**Package installation monitoring:**\n```bash\njournalctl -u apt-daily -u apt-daily-upgrade -f\n```\n\n## Log Rotation & Management\n\n```bash\n# Check journal size\njournalctl --disk-usage\n\n# Vacuum old logs\nsudo journalctl --vacuum-time=7d\nsudo journalctl --vacuum-size=500M\n\n# View available boots\njournalctl --list-boots\n\n# Follow logs from previous boot\njournalctl -b -1 -f\n```\n\n## Alternative Log Files\n\nSome systems still use traditional log files:\n```bash\n# System log\ntail -f /var/log/syslog\n\n# Kernel log\ntail -f /var/log/kern.log\n\n# Authentication\ntail -f /var/log/auth.log\n\n# Package management\ntail -f /var/log/dpkg.log\ntail -f /var/log/apt/history.log\n\n# X11\ntail -f /var/log/Xorg.0.log\n```\n\n## Troubleshooting\n\n**Journal not persistent:**\n- Check `/var/log/journal/` exists\n- Run: `sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald`\n\n**Too much log output:**\n- Increase filter priority: `-p err` instead of `-p info`\n- Filter by unit: `-u specific-service`\n- Use grep to focus on specific issues\n\n**Logs filling disk:**\n- Set limit in `/etc/systemd/journald.conf`:\n ```\n SystemMaxUse=500M\n ```\n- Restart journald: `sudo systemctl restart systemd-journald`\n\n## Notes\n\n- Use `-o verbose` for maximum detail\n- Use `-o json` for machine-readable output\n- Use `-o cat` for just the message without metadata\n- Ctrl+C to stop following logs\n- Consider using `multitail` for advanced multi-log viewing\n- Set `--lines=` or `-n` to control how much history to show initially\n", "filepath": "logging/tail-system-logs.md" } ], "media": [ { "name": "check-codecs", "category": "media", "description": "Evaluate installed media codecs on the computer", "tags": [ "media", "codecs", "audio", "video", "system", "project", "gitignored" ], "content": "You are helping the user evaluate what media codecs are installed on their system.\n\n## Process\n\n1. **Check GStreamer plugins**\n - List GStreamer plugins: `gst-inspect-1.0 | grep -i plugin`\n - Check installed GStreamer packages:\n ```bash\n dpkg -l | grep -E \"gstreamer.*plugin\"\n ```\n - Key packages:\n - `gstreamer1.0-plugins-base` (essential)\n - `gstreamer1.0-plugins-good` (common formats)\n - `gstreamer1.0-plugins-bad` (additional)\n - `gstreamer1.0-plugins-ugly` (patent-encumbered)\n - `gstreamer1.0-libav` (FFmpeg integration)\n\n2. **Check FFmpeg codecs**\n - List FFmpeg codecs: `ffmpeg -codecs 2>/dev/null | head -50`\n - List encoders: `ffmpeg -encoders 2>/dev/null | head -20`\n - List decoders: `ffmpeg -decoders 2>/dev/null | head -20`\n - Check FFmpeg version: `ffmpeg -version`\n\n3. 
**Check VA-API support (hardware acceleration)**\n - Check VA-API: `vainfo`\n - For AMD: should list the Mesa Gallium (radeonsi) VA-API driver\n - Verify hardware encoding/decoding support\n\n4. **Check for common codec packages**\n ```bash\n dpkg -l | grep -E \"libavcodec|libavformat|libavutil|x264|x265|vp9|opus|aac|mp3\"\n ```\n\n5. **Test codec support**\n - Video codecs to verify:\n - H.264/AVC (most common)\n - H.265/HEVC (4K content)\n - VP8/VP9 (WebM)\n - AV1 (modern codec)\n - Audio codecs to verify:\n - MP3\n - AAC\n - Opus\n - FLAC\n - Vorbis\n\n6. **Identify missing codecs**\n - Common needs:\n - DVD playback: `libdvd-pkg`\n - Proprietary formats: `ubuntu-restricted-extras`\n - H.265 encoding: `x265`\n - AV1: `libaom3`, `libdav1d-dev`\n\n7. **Suggest installations**\n\n **For comprehensive codec support:**\n ```bash\n sudo apt install ubuntu-restricted-extras\n sudo apt install ffmpeg\n sudo apt install gstreamer1.0-plugins-{base,good,bad,ugly}\n sudo apt install gstreamer1.0-libav\n sudo apt install gstreamer1.0-vaapi # Hardware acceleration\n ```\n\n **For DVD:**\n ```bash\n sudo apt install libdvd-pkg\n sudo dpkg-reconfigure libdvd-pkg\n ```\n\n8. **Check browser codec support**\n - Visit: `https://www.youtube.com/html5`\n - Shows which codecs the browser supports\n - Check hardware acceleration in browsers\n\n## Output\n\nProvide a report showing:\n- Installed GStreamer plugins\n- FFmpeg codec support\n- Hardware acceleration status (VA-API)\n- Missing common codecs\n- Installation recommendations\n- Browser codec support status", "filepath": "media/check-codecs.md" }, { "name": "diagnose-lan-connectivity", "category": "network", "description": "Diagnose LAN connectivity issues by pinging gateway and testing network", "tags": [ "network", "diagnostics", "connectivity", "gateway", "troubleshooting", "project", "gitignored" ], "content": "You are helping the user diagnose LAN connectivity issues.\n\n## Process\n\n1. **Identify network configuration**\n - Run `ip addr show` to check network interfaces\n - Run `ip route show` to identify default gateway\n - Check DNS servers: `cat /etc/resolv.conf`\n\n2. **Test gateway connectivity**\n - Ping default gateway: `ping -c 4 <gateway-ip>`\n - If gateway is unreachable, check:\n - Network interface status: `ip link show`\n - NetworkManager status: `nmcli device status`\n - Physical connection (if applicable)\n\n3. **Test DNS resolution**\n - Test DNS lookup: `nslookup google.com`\n - Try alternative DNS: `nslookup google.com 8.8.8.8`\n - Check if DNS is the issue\n\n4. **Test external connectivity**\n - Ping external IP: `ping -c 4 8.8.8.8`\n - Ping domain name: `ping -c 4 google.com`\n - Traceroute to identify where packets stop: `traceroute google.com`\n\n5. **Check for common issues**\n - Firewall blocking: `sudo ufw status`\n - IP conflicts: `arp -a` (look for duplicate IPs)\n - DHCP issues: Check if IP is self-assigned (169.254.x.x)\n\n6. 
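**Run a quick scripted check (optional)**\n - A minimal sketch that chains the gateway, external, and DNS tests above; targets and ping counts are only examples:\n ```bash\n #!/bin/bash\n # Find the default gateway, then test LAN, WAN, and DNS reachability in order\n gw=$(ip route | awk '/^default/ {print $3; exit}')\n echo \"Default gateway: $gw\"\n ping -c 2 \"$gw\" >/dev/null && echo \"LAN: OK\" || echo \"LAN: FAIL\"\n ping -c 2 8.8.8.8 >/dev/null && echo \"WAN (IP): OK\" || echo \"WAN (IP): FAIL\"\n nslookup google.com >/dev/null && echo \"DNS: OK\" || echo \"DNS: FAIL\"\n ```\n\n7. 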
**Advanced diagnostics**\n - Check routing table: `ip route show`\n - Monitor network traffic: `sudo tcpdump -i <interface> -c 20`\n - Check for packet loss: `mtr <host>`\n\n## Output\n\nProvide a diagnostic report showing:\n- Network configuration summary\n- Gateway reachability status\n- DNS resolution status\n- External connectivity status\n- Identified issues (if any)\n- Recommended fixes", "filepath": "network/diagnose-lan-connectivity.md" }, { "name": "scan-lan", "category": "network", "description": "Scan local network using ARP and produce a LAN map", "tags": [ "network", "diagnostics", "lan", "arp", "scanning", "project", "gitignored" ], "content": "You are helping the user scan their local network and create a comprehensive LAN map.\n\n## Process\n\n1. **Identify network interface and subnet**\n - Run `ip route | grep default` to find default gateway\n - Run `ip addr show` to identify active network interface and IP\n - Determine subnet (likely 10.0.0.0/24 based on Daniel's setup)\n\n2. **Perform ARP scan**\n - Run `arp -a` to see current ARP cache\n - For more comprehensive scan, use `sudo arp-scan --localnet` (install if needed: `sudo apt install arp-scan`)\n - Alternative: `sudo nmap -sn 10.0.0.0/24` for network sweep\n\n3. **Gather detailed information**\n - For each discovered host, attempt to:\n - Get hostname: `nslookup <ip>`\n - Identify device type if possible (router, printer, etc.)\n - Check if SSH is accessible: `timeout 2 nc -z <ip> 22`\n\n4. **Create LAN map**\n - Organize discovered devices by:\n - IP address\n - MAC address\n - Hostname (if available)\n - Device type (if identifiable)\n - Open ports/services (if detected)\n\n5. **Save results**\n - Offer to save the LAN map to `~/ai-docs/network/lan-map-$(date +%Y%m%d).md`\n - Include timestamp and subnet information\n\n## Output\n\nPresent the LAN map in a clear table format showing:\n- IP Address\n- MAC Address\n- Hostname\n- Device Type/Notes\n- Status (active/inactive)\n\nInclude summary statistics (total devices, device type breakdown).", "filepath": "network/scan-lan.md" } ], "optimisation": [ { "name": "large-files", "category": "optimisation", "description": "", "tags": [], "content": "Your task is to run a storage audit of this desktop.\n\nYou should identify which folders are occupying the most file storage.\n\nThen:\n\nTry to identify any files or folders that are obviously superfluous to the user and suggest them for deletion.", "filepath": "optimisation/large-files.md" }, { "name": "optimize-boot-speed", "category": "optimisation", "description": "", "tags": [], "content": "You are optimizing system boot speed by identifying and remediating slow or hanging processes.\n\n## Your Task\n\n1. **Analyze boot performance** using systemd-analyze:\n - `systemd-analyze` - Show total boot time\n - `systemd-analyze blame` - List services by boot time impact\n - `systemd-analyze critical-chain` - Show critical path bottlenecks\n - `systemd-analyze plot > boot-analysis.svg` - Generate visual timeline (optional)\n\n2. **Identify slow services**:\n - Services taking > 5 seconds to start\n - Services in the critical boot path causing delays\n - Services with timeout issues\n - Parallel vs sequential loading issues\n\n3. **Detect hanging processes**:\n - Check for services waiting on timeouts\n - Identify dependency chain bottlenecks\n - Look for failed network mounts or remote resources\n - Find services that could be started later (after boot completes)\n\n4. 
**Categorize optimization opportunities**:\n - **Disable**: Unnecessary services that can be completely disabled\n - **Delay**: Services that can use `After=network-online.target` or similar\n - **Parallel**: Services that could start in parallel instead of sequentially\n - **Configure**: Services needing timeout or dependency adjustments\n\n5. **Propose specific optimizations**:\n - Provide exact `systemctl` commands to implement changes\n - Explain the impact and safety of each change\n - Suggest configuration tweaks for slow services\n - Recommend masking vs disabling where appropriate\n\n## Key Commands\n\n- `systemd-analyze time` - Overall boot time breakdown\n- `systemd-analyze blame` - Time taken by each unit\n- `systemd-analyze critical-chain` - Critical path analysis\n- `systemctl list-dependencies --before` - What loads before a service\n- `systemctl list-dependencies --after` - What loads after a service\n- `journalctl -b | grep -i timeout` - Find timeout issues\n- `systemctl show --property=TimeoutStartUSec` - Check timeout settings\n\n## Output Format\n\n1. **Boot Performance Summary**:\n - Total boot time\n - Kernel, userspace, and firmware times\n - Comparison to typical boot times\n\n2. **Top Boot Time Offenders** (services > 3 seconds):\n - Service name and time taken\n - What the service does\n - Whether it's essential\n\n3. **Hanging/Timeout Issues**:\n - Services with timeout problems\n - Root cause analysis\n - Recommended fixes\n\n4. **Optimization Recommendations**:\n - Prioritized list of changes (high to low impact)\n - Specific commands to execute\n - Expected time savings\n - Risk assessment for each change\n\n5. **Implementation Plan**:\n - Step-by-step instructions\n - Backup/rollback procedures\n - Testing recommendations\n\nBe specific and actionable. Always explain the safety and reversibility of proposed changes.\n", "filepath": "optimisation/optimize-boot-speed.md" } ], "package-management": [ { "name": "check-apt-health", "category": "package-management", "description": "", "tags": [], "content": "# APT Package Manager Health Check\n\nYou are helping the user ensure that the APT package manager on Ubuntu is in good working health and remove any broken third-party repositories or packages.\n\n## Your tasks:\n\n1. **Check basic APT functionality:**\n - Update package lists: `sudo apt update`\n - Check for errors in output\n - Verify cache state: `apt-cache policy`\n\n2. **Check for broken packages:**\n - List broken packages: `dpkg -l | grep \"^..r\"`\n - Check for unconfigured packages: `dpkg -l | grep \"^..c\"`\n - Check dpkg status: `sudo dpkg --configure -a`\n - Check for broken dependencies: `sudo apt-get check`\n\n3. **Identify problematic repositories:**\n - List all repositories:\n ```bash\n grep -r --include '*.list' '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/\n ```\n - Check for failing repositories during update:\n ```bash\n sudo apt update 2>&1 | grep -i \"fail\\|error\\|warning\"\n ```\n - List third-party PPAs:\n ```bash\n ls /etc/apt/sources.list.d/\n ```\n\n4. **Check APT cache integrity:**\n - Check cache size: `du -sh /var/cache/apt/archives/`\n - List problematic cache entries:\n ```bash\n sudo apt-get clean\n sudo apt-get autoclean\n ```\n\n5. 
**Fix broken dependencies:**\n - Attempt to fix broken packages:\n ```bash\n sudo apt --fix-broken install\n ```\n - Force reconfiguration of all packages:\n ```bash\n sudo dpkg --configure -a\n ```\n - Try to complete interrupted installations:\n ```bash\n sudo apt-get -f install\n ```\n\n6. **Identify and handle broken third-party repositories:**\n For each failing repository found:\n - Ask user if they still need it\n - If not needed, disable or remove:\n ```bash\n sudo add-apt-repository --remove ppa:<user>/<ppa-name>\n ```\n - Or manually remove: `sudo rm /etc/apt/sources.list.d/<repo-name>.list`\n - Or disable by commenting out: `sudo sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/<repo-name>.list`\n\n7. **Check for GPG key issues:**\n - Check for missing GPG keys:\n ```bash\n sudo apt update 2>&1 | grep \"NO_PUBKEY\"\n ```\n - If missing keys found, attempt to import (note: `apt-key` is deprecated on newer Ubuntu releases; prefer keyrings referenced via `signed-by`):\n ```bash\n sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <key-id>\n ```\n - List all trusted keys: `apt-key list`\n\n8. **Check for duplicate repositories:**\n - Find duplicates:\n ```bash\n grep -h \"^deb \" /etc/apt/sources.list /etc/apt/sources.list.d/* | sort | uniq -d\n ```\n - Remove duplicates manually or ask user which to keep\n\n9. **Check disk space:**\n - Disk space in /var: `df -h /var`\n - If low on space:\n ```bash\n sudo apt-get clean\n sudo apt-get autoclean\n sudo apt-get autoremove\n ```\n\n10. **Check for held packages:**\n - List held packages: `apt-mark showhold`\n - These packages won't be upgraded - ask user if intentional\n - To unhold: `sudo apt-mark unhold <package>`\n\n11. **Verify repository configurations:**\n - Check main sources.list: `cat /etc/apt/sources.list`\n - Ensure official Ubuntu repositories are present:\n - main\n - restricted\n - universe\n - multiverse\n - security updates\n - updates\n - backports (optional)\n\n12. **Check for obsolete packages:**\n - List locally installed packages not in any repository:\n ```bash\n aptitude search '~o'\n ```\n - Or using apt: `apt list '~o'`\n\n13. **Verify package authentication:**\n - Check if packages are being verified:\n ```bash\n grep -r \"APT::Get::AllowUnauthenticated\" /etc/apt/\n ```\n - Should be \"false\" or not present for security\n\n14. **Run full system check:**\n - Check for consistency: `sudo apt-get check`\n - Simulate upgrade to check for issues: `sudo apt-get -s upgrade`\n - Simulate dist-upgrade: `sudo apt-get -s dist-upgrade`\n\n15. **Clean up:**\n - Remove old packages: `sudo apt-get autoremove`\n - Clean package cache: `sudo apt-get clean`\n - Clean old cached packages: `sudo apt-get autoclean`\n\n16. **Reset APT if severely broken:**\n If APT is severely corrupted, may need to:\n ```bash\n # Backup current sources\n sudo cp -r /etc/apt /etc/apt.backup\n\n # Reset dpkg\n sudo dpkg --clear-avail\n sudo apt-get update\n\n # Reinstall base packages if needed\n sudo apt-get install --reinstall apt dpkg\n ```\n\n17. **Check APT configuration files:**\n - List all APT config: `apt-config dump`\n - Check for problematic configurations in:\n - `/etc/apt/apt.conf`\n - `/etc/apt/apt.conf.d/`\n - Look for unusual proxy settings, deprecated options\n\n18. **Report findings:**\n Summarize:\n - Number of broken packages (if any)\n - Problematic repositories (outdated PPAs, failing repos)\n - Missing GPG keys\n - Dependency issues\n - Disk space issues\n - Held packages\n - Overall APT health status (HEALTHY / NEEDS ATTENTION / BROKEN)\n\n19. 
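**Gather a quick health snapshot (optional):**\n A small sketch that collects a few key numbers to include in the report and recommendations; adjust or extend it as needed:\n ```bash\n echo \"Held packages: $(apt-mark showhold | wc -l)\"\n echo \"Partially installed packages (dpkg --audit):\"\n sudo dpkg --audit\n echo \"Dependency check:\"\n sudo apt-get check && echo \"OK\"\n ```\n\n20. 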
**Provide recommendations:**\n - List of repositories to remove\n - Packages to fix or remove\n - Whether full system upgrade is recommended\n - Cleanup commands to run\n - Any configuration changes needed\n - If APT is healthy, suggest regular maintenance:\n ```bash\n sudo apt update && sudo apt upgrade\n sudo apt autoremove\n sudo apt clean\n ```\n\n## Important notes:\n- Always backup before removing repositories or packages\n- Don't remove dependencies of packages user needs\n- Some third-party repos may be intentionally added - confirm before removing\n- Be cautious with --fix-broken - it may remove packages\n- Check if user is running unsupported Ubuntu version (EOL)\n- PPAs may lag behind Ubuntu releases\n- sudo is required for most operations\n- After major fixes, suggest reboot to ensure clean state\n", "filepath": "package-management/check-apt-health.md" }, { "name": "check-third-party-repos", "category": "package-management", "description": "Identify packages from third-party repos that may be available in official repos", "tags": [ "system", "packages", "repositories", "optimization", "project", "gitignored" ], "content": "You are helping the user identify packages installed from third-party repositories that might now be available in official Ubuntu repos.\n\n## Process\n\n1. **List all configured repositories**\n - Check `/etc/apt/sources.list`\n - Check `/etc/apt/sources.list.d/*`\n - Identify which are third-party (PPAs, custom repos)\n\n2. **Identify packages from third-party sources**\n - Run: `apt list --installed | grep -v \"ubuntu\\|debian\"`\n - For each PPA, find packages: `apt-cache policy ` shows source\n\n3. **Check official repo availability**\n - For each third-party package:\n - Check if available in Ubuntu repos: `apt-cache policy `\n - Compare versions (official might be newer or older)\n - Note if it's in `universe`, `multiverse`, or `main`\n\n4. **Common candidates for migration**\n - Development tools (git, docker, etc.)\n - Media codecs\n - Drivers (graphics, etc.)\n - Programming languages (Python, Node.js, etc.)\n\n5. **Evaluate risks and benefits**\n - Official repos: More stable, better security updates\n - PPAs: Often newer versions, specific features\n - Suggest migration if:\n - Official version is adequate\n - PPA is unmaintained\n - Security concerns\n\n6. **Create migration plan**\n - For packages to migrate:\n - Remove third-party package\n - Remove PPA if no longer needed\n - Install from official repo\n - Test functionality\n\n## Output\n\nProvide a report showing:\n- List of third-party repositories in use\n- Packages installed from each third-party source\n- Which packages are available in official repos\n- Version comparison\n- Migration recommendations with commands\n- Warnings about potential breaking changes", "filepath": "package-management/check-third-party-repos.md" }, { "name": "configure-auto-updates", "category": "package-management", "description": "", "tags": [], "content": "# Configure Ubuntu Auto-Updates\n\nYou are helping the user configure automatic updates for Ubuntu.\n\n## Your tasks:\n\n1. **Check current update configuration:**\n - Check if unattended-upgrades is installed: `dpkg -l | grep unattended-upgrades`\n - Current configuration: `cat /etc/apt/apt.conf.d/50unattended-upgrades`\n - Check if auto-updates are enabled: `cat /etc/apt/apt.conf.d/20auto-upgrades`\n - Update check frequency: `cat /etc/apt/apt.conf.d/10periodic`\n\n2. 
**Install unattended-upgrades if not present:**\n ```bash\n sudo apt update\n sudo apt install unattended-upgrades apt-listchanges\n ```\n\n3. **Ask user about their update preferences:**\n Discuss with the user:\n - **Security updates only** (recommended, safest)\n - **Security + recommended updates**\n - **All updates** (risky for production systems)\n - **Update frequency**: daily, weekly\n - **Auto-reboot preference**: never, only for security, scheduled time\n - **Email notifications** (if configured)\n\n4. **Configure update types:**\n Edit `/etc/apt/apt.conf.d/50unattended-upgrades`:\n\n For security updates only (recommended):\n ```\n Unattended-Upgrade::Allowed-Origins {\n \"${distro_id}:${distro_codename}-security\";\n };\n ```\n\n For security + updates:\n ```\n Unattended-Upgrade::Allowed-Origins {\n \"${distro_id}:${distro_codename}-security\";\n \"${distro_id}:${distro_codename}-updates\";\n };\n ```\n\n5. **Configure automatic reboot settings:**\n In `/etc/apt/apt.conf.d/50unattended-upgrades`, configure:\n\n **Never auto-reboot (safest):**\n ```\n Unattended-Upgrade::Automatic-Reboot \"false\";\n ```\n\n **Auto-reboot when required:**\n ```\n Unattended-Upgrade::Automatic-Reboot \"true\";\n Unattended-Upgrade::Automatic-Reboot-Time \"02:00\";\n ```\n\n **Only reboot if no users logged in:**\n ```\n Unattended-Upgrade::Automatic-Reboot-WithUsers \"false\";\n ```\n\n6. **Configure email notifications (optional):**\n If user wants email notifications:\n ```\n Unattended-Upgrade::Mail \"user@example.com\";\n Unattended-Upgrade::MailReport \"on-change\"; // or \"always\" or \"only-on-error\"\n ```\n\n Note: Requires mail system configured (postfix, sendmail, etc.)\n\n7. **Enable automatic updates:**\n Create/edit `/etc/apt/apt.conf.d/20auto-upgrades`:\n ```\n APT::Periodic::Update-Package-Lists \"1\";\n APT::Periodic::Download-Upgradeable-Packages \"1\";\n APT::Periodic::AutocleanInterval \"7\";\n APT::Periodic::Unattended-Upgrade \"1\";\n ```\n\n Explanation:\n - `Update-Package-Lists`: Update package list (1=daily)\n - `Download-Upgradeable-Packages`: Pre-download updates (1=daily)\n - `AutocleanInterval`: Clean up old packages (7=weekly)\n - `Unattended-Upgrade`: Actually install updates (1=daily)\n\n8. **Configure blacklist (packages to exclude):**\n In `/etc/apt/apt.conf.d/50unattended-upgrades`:\n ```\n Unattended-Upgrade::Package-Blacklist {\n \"linux-image-*\"; // Example: don't auto-update kernel\n \"nvidia-*\"; // Example: don't auto-update GPU drivers\n };\n ```\n\n Ask user if there are specific packages they want to exclude.\n\n9. **Test configuration:**\n - Check configuration syntax:\n ```bash\n sudo unattended-upgrades --dry-run --debug\n ```\n - View what would be updated:\n ```bash\n sudo unattended-upgrade --dry-run\n ```\n\n10. **Set up monitoring:**\n - Check logs: `cat /var/log/unattended-upgrades/unattended-upgrades.log`\n - Check dpkg log: `cat /var/log/dpkg.log`\n - Monitor update service status: `systemctl status unattended-upgrades.service`\n\n11. 
**Configure additional safety options:**\n In `/etc/apt/apt.conf.d/50unattended-upgrades`:\n ```\n // Remove unused dependencies\n Unattended-Upgrade::Remove-Unused-Dependencies \"true\";\n\n // Remove unused kernel packages\n Unattended-Upgrade::Remove-Unused-Kernel-Packages \"true\";\n\n // Automatically remove new unused dependencies\n Unattended-Upgrade::Remove-New-Unused-Dependencies \"true\";\n\n // Split the upgrade into smallest possible chunks\n Unattended-Upgrade::MinimalSteps \"true\";\n\n // Install updates when on AC power only\n Unattended-Upgrade::OnlyOnACPower \"true\"; // laptops only\n ```\n\n12. **Set up pre/post-update hooks (optional):**\n If user wants custom actions before/after updates:\n ```\n Unattended-Upgrade::PreUpdate \"echo 'Starting updates' | logger\";\n Unattended-Upgrade::PostUpdate \"echo 'Updates complete' | logger\";\n ```\n\n13. **Enable and start the service:**\n ```bash\n sudo systemctl enable unattended-upgrades\n sudo systemctl start unattended-upgrades\n sudo systemctl status unattended-upgrades\n ```\n\n14. **Manual trigger for testing:**\n ```bash\n sudo unattended-upgrade -d\n ```\n\n15. **Provide best practices and recommendations:**\n - **Desktops/Workstations**: Security updates only, no auto-reboot\n - **Servers**: Security updates only, scheduled reboot window if needed\n - **Laptops**: Same as desktop, plus OnlyOnACPower option\n - **Production systems**: Manual updates preferred, or extensive testing\n - Always check logs periodically: `/var/log/unattended-upgrades/`\n - Test in non-production environment first\n - Keep kernel packages in blacklist if you want manual control\n - Consider using livepatch for kernel updates without rebooting\n - Set up email notifications for important systems\n - Monitor disk space - updates require free space\n\n16. **Show how to check what's configured:**\n ```bash\n # View current configuration\n apt-config dump APT::Periodic\n\n # Check when updates last ran\n ls -la /var/lib/apt/periodic/\n\n # View update history\n cat /var/log/unattended-upgrades/unattended-upgrades.log\n ```\n\n## Important notes:\n- Backup configuration files before editing\n- Test with --dry-run before enabling\n- Auto-reboot can be disruptive - configure carefully\n- Email requires MTA (mail system) configured\n- Updates consume bandwidth and disk space\n- Some updates may break custom configurations\n- Keep an eye on logs after enabling\n- Security updates are generally safe to auto-install\n- Feature updates may require testing\n", "filepath": "package-management/configure-auto-updates.md" }, { "name": "evaluate-installed-software", "category": "package-management", "description": "Evaluate installed software and suggest complementary CLIs or GUIs", "tags": [ "system", "audit", "software", "recommendations", "optimization", "project", "gitignored" ], "content": "You are helping the user evaluate their installed software and suggest complementary tools.\n\n## Process\n\n1. **Inventory installed software**\n - APT packages: `apt list --installed | wc -l`\n - Snap packages: `snap list`\n - Flatpak packages: `flatpak list`\n - pip packages: `pip list`\n - Manually installed in `~/programs`\n\n2. **Categorize software**\n - Development tools\n - Media/graphics applications\n - System utilities\n - Communication tools\n - AI/ML tools\n - Backup/storage tools\n\n3. 
**Identify gaps and complementary tools**\n - For each category, suggest:\n - Missing CLIs that complement existing GUIs\n - Missing GUIs that complement existing CLIs\n - Alternative tools that might be better suited\n - Modern replacements for outdated tools\n\n4. **Examples of complementary suggestions**\n - If `docker` installed, suggest `lazydocker` GUI\n - If `git` installed, suggest `gitui` or `lazygit`\n - If `code` (VS Code) installed, suggest useful extensions\n - If media editing tools installed, suggest codec packages\n - If Python installed, suggest `pipx` for isolated CLI tools\n\n5. **Present recommendations**\n - Group by category\n - Explain the benefit of each suggestion\n - Prioritize based on user's existing software patterns\n\n## Output\n\nProvide a report showing:\n- Summary of installed software by category\n- List of recommended complementary tools\n- Brief explanation of why each tool would be useful\n- Installation commands for suggested tools", "filepath": "package-management/evaluate-installed-software.md" }, { "name": "identify-unused-packages", "category": "package-management", "description": "Identify packages user hasn't used recently and may wish to remove", "tags": [ "system", "cleanup", "packages", "optimization", "project", "gitignored" ], "content": "You are helping the user identify unused packages that could be removed to free up space.\n\n## Process\n\n1. **Check package installation dates**\n - For APT packages: `ls -lt /var/lib/dpkg/info/*.list | tail -50`\n - Check package access times if available\n\n2. **Identify large packages**\n - List by size: `dpkg-query -W -f='${Installed-Size}\\t${Package}\\n' | sort -rn | head -30`\n - Focus on large packages that might be unused\n\n3. **Check Flatpak packages**\n - List Flatpaks: `flatpak list --app`\n - Check Flatpak size: `flatpak list --app --columns=name,application,size`\n - Suggest running: `flatpak uninstall --unused` to remove unused runtimes\n\n4. **Check Snap packages**\n - List snaps: `snap list`\n - Check snap disk usage: `du -sh /var/lib/snapd/snaps`\n - Identify old snap revisions: `snap list --all | grep disabled`\n - Suggest: `snap remove --purge `\n\n5. **Identify orphaned packages (APT)**\n - Find orphaned packages: `deborphan`\n - Check apt autoremove suggestions: `apt autoremove --dry-run`\n\n6. **Check for development packages**\n - List `-dev` packages: `dpkg -l | grep -E \"^ii.*-dev\"`\n - Ask user if they're actively developing and need these\n\n7. **Review by category**\n - Games (if user doesn't game)\n - Old kernels: `dpkg -l | grep linux-image`\n - Language packs not needed\n - Documentation packages (`-doc` suffix)\n\n8. **Present findings to user**\n - Group by category and size\n - Estimate space that could be freed\n - Ask user to confirm before suggesting removal\n\n## Output\n\nProvide a report showing:\n- Total number of installed packages\n- Potentially unused packages by category\n- Space that could be freed\n- Safe removal suggestions\n- Warning about packages to NOT remove (dependencies)", "filepath": "package-management/identify-unused-packages.md" } ], "peripherals": [ { "name": "list-usb-devices", "category": "peripherals", "description": "", "tags": [], "content": "# List USB Devices\n\nYou are helping the user view all connected USB devices with detailed information.\n\n## Task\n\n1. **Basic USB device listing:**\n ```bash\n # Simple list\n lsusb\n\n # With tree structure showing hubs\n lsusb -t\n\n # Verbose output\n lsusb -v | less\n ```\n\n2. 
**Detailed information for each device:**\n ```bash\n # Iterate through all devices\n for device in $(lsusb | awk '{print $2\":\"$4}' | sed 's/:$//' | tr ':' '/'); do\n bus=$(echo $device | cut -d'/' -f1)\n dev=$(echo $device | cut -d'/' -f2)\n echo \"=== Device: Bus $bus Device $dev ===\"\n lsusb -v -s $bus:$dev 2>/dev/null | head -30\n echo \"\"\n done\n ```\n\n3. **Show USB devices by type:**\n ```bash\n echo \"=== Input Devices (Keyboards, Mice) ===\"\n lsusb | grep -iE \"keyboard|mouse|input\"\n\n echo -e \"\\n=== Storage Devices ===\"\n lsusb | grep -iE \"storage|disk|flash|card reader\"\n\n echo -e \"\\n=== Audio Devices ===\"\n lsusb | grep -iE \"audio|sound|headset|microphone\"\n\n echo -e \"\\n=== Video Devices ===\"\n lsusb | grep -iE \"camera|video|webcam\"\n\n echo -e \"\\n=== Bluetooth Adapters ===\"\n lsusb | grep -i bluetooth\n\n echo -e \"\\n=== Network Adapters ===\"\n lsusb | grep -iE \"network|ethernet|wifi|802.11\"\n ```\n\n4. **USB device details from sysfs:**\n ```bash\n # List all USB devices with details\n for dev in /sys/bus/usb/devices/*; do\n if [ -f \"$dev/manufacturer\" ] && [ -f \"$dev/product\" ]; then\n echo \"Device: $(cat $dev/product 2>/dev/null)\"\n echo \"Manufacturer: $(cat $dev/manufacturer 2>/dev/null)\"\n echo \"Serial: $(cat $dev/serial 2>/dev/null)\"\n echo \"Speed: $(cat $dev/speed 2>/dev/null) Mbps\"\n echo \"---\"\n fi\n done\n ```\n\n5. **USB device mount points and block devices:**\n ```bash\n # Show USB storage devices\n lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,MODEL | grep -i usb\n\n # USB device names in /dev\n ls -l /dev/sd* /dev/nvme* 2>/dev/null | grep -E \"^b\"\n\n # Detailed block device info\n lsblk -f\n ```\n\n6. **USB power consumption:**\n ```bash\n # Check power usage\n for device in /sys/bus/usb/devices/*/power/active_duration; do\n dev=$(dirname $(dirname $device))\n if [ -f \"$dev/product\" ]; then\n echo \"$(cat $dev/product): Power: $(cat $dev/power/level) Active: $(cat $device)ms\"\n fi\n done\n ```\n\n7. **USB device speed and capabilities:**\n ```bash\n # USB version and speed\n for dev in /sys/bus/usb/devices/usb*; do\n echo \"USB Bus $(basename $dev):\"\n echo \" Version: $(cat $dev/version 2>/dev/null)\"\n echo \" Speed: $(cat $dev/speed 2>/dev/null) Mbps\"\n echo \" Max Child: $(cat $dev/maxchild 2>/dev/null) ports\"\n echo \"\"\n done\n ```\n\n8. **Create formatted device report:**\n ```bash\n cat > /tmp/usb-devices.txt << EOF\n USB Devices Report\n ==================\n Generated: $(date)\n Hostname: $(hostname)\n\n === Connected USB Devices ===\n EOF\n\n lsusb >> /tmp/usb-devices.txt\n echo -e \"\\n=== USB Device Tree ===\" >> /tmp/usb-devices.txt\n lsusb -t >> /tmp/usb-devices.txt\n\n echo -e \"\\n=== USB Storage Devices ===\" >> /tmp/usb-devices.txt\n lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,MODEL | grep -i usb >> /tmp/usb-devices.txt\n\n echo -e \"\\n=== USB Device Details ===\" >> /tmp/usb-devices.txt\n usb-devices >> /tmp/usb-devices.txt\n\n cat /tmp/usb-devices.txt\n ```\n\n9. **Check USB controller info:**\n ```bash\n # List USB controllers\n lspci | grep -i usb\n\n # Detailed controller info\n for controller in $(lspci | grep -i usb | cut -d' ' -f1); do\n echo \"=== Controller $controller ===\"\n lspci -v -s $controller | head -20\n echo \"\"\n done\n ```\n\n10. **Monitor USB events in real-time:**\n ```bash\n # Watch for USB connect/disconnect\n udevadm monitor --subsystem-match=usb\n\n # Or use dmesg\n dmesg -w | grep -i usb\n ```\n\n11. 
**Check USB autosuspend settings:**\n ```bash\n # Check which devices have autosuspend enabled\n for dev in /sys/bus/usb/devices/*/power/control; do\n device=$(dirname $(dirname $dev))\n if [ -f \"$device/product\" ]; then\n echo \"$(cat $device/product): $(cat $dev)\"\n fi\n done\n ```\n\n12. **Find USB device by vendor/product ID:**\n ```bash\n # Search by ID\n # Example: Find device 046d:c52b\n lsusb -d 046d:c52b -v\n\n # Find all devices from vendor\n lsusb -d 046d: # Logitech devices\n ```\n\n13. **Check USB permissions:**\n ```bash\n # Show device permissions\n ls -l /dev/bus/usb/*/* | head -20\n\n # Find device files for specific bus\n ls -l /dev/bus/usb/001/\n\n # Check udev rules for USB\n ls -l /etc/udev/rules.d/*usb*\n ```\n\n## Formatted Output\n\nCreate a nice summary:\n```bash\ncat > /tmp/usb-summary.sh << 'EOF'\n#!/bin/bash\n\necho \"\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\"\necho \"\u2551 USB Devices Summary \u2551\"\necho \"\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\"\necho \"\"\n\necho \"Total USB devices: $(lsusb | wc -l)\"\necho \"\"\n\necho \"--- By Type ---\"\necho \"Input devices: $(lsusb | grep -ciE 'keyboard|mouse|input')\"\necho \"Storage devices: $(lsusb | grep -ciE 'storage|disk|flash')\"\necho \"Audio devices: $(lsusb | grep -ci audio)\"\necho \"Video devices: $(lsusb | grep -ciE 'camera|video')\"\necho \"Network devices: $(lsusb | grep -ciE 'network|ethernet|wifi')\"\necho \"Bluetooth adapters: $(lsusb | grep -ci bluetooth)\"\necho \"\"\n\necho \"--- USB Controllers ---\"\nlspci | grep -i usb\necho \"\"\n\necho \"--- Storage Devices ---\"\nlsblk -o NAME,SIZE,TYPE,MOUNTPOINT | grep -A 10 -E \"^sd|^nvme\"\necho \"\"\n\necho \"--- Recent USB Events ---\"\njournalctl -k --since \"1 hour ago\" | grep -i usb | tail -10\n\nEOF\n\nchmod +x /tmp/usb-summary.sh\nbash /tmp/usb-summary.sh\n```\n\n## Troubleshooting\n\n**Device not detected:**\n```bash\n# Check kernel messages\ndmesg | grep -i usb | tail -20\n\n# Check if ports working\ncat /sys/kernel/debug/usb/devices\n\n# Rescan USB bus\necho 1 | sudo tee /sys/bus/pci/rescan\n```\n\n**Device keeps disconnecting:**\n```bash\n# Disable autosuspend for problematic device\n# Find device path\ndevice_path=$(find /sys/bus/usb/devices/ -name \"DEVICE_NAME*\")\n\n# Disable autosuspend\necho 'on' | sudo tee $device_path/power/control\n```\n\n**Permission denied:**\n```bash\n# Add user to plugdev group\nsudo usermod -aG plugdev $USER\n\n# Create udev rule for device\n# /etc/udev/rules.d/50-mydevice.rules\n# SUBSYSTEM==\"usb\", ATTR{idVendor}==\"1234\", ATTR{idProduct}==\"5678\", MODE=\"0666\"\n```\n\n## Export Device Information\n\n```bash\n# Create detailed report\nusb-devices > ~/usb-devices-$(date +%Y%m%d).txt\n\n# Or with lsusb\nlsusb -v > ~/usb-devices-verbose-$(date +%Y%m%d).txt\n```\n\n## Notes\n\n- USB 1.1: 12 Mbps (Full Speed)\n- USB 2.0: 480 Mbps (High Speed)\n- USB 3.0: 5 Gbps (SuperSpeed)\n- USB 3.1: 10 Gbps 
(SuperSpeed+)\n- USB 3.2: 20 Gbps\n- USB 4.0: 40 Gbps\n\n- Vendor ID format: `046d` (Logitech), `8086` (Intel), etc.\n- Product ID format: `c52b` (specific device model)\n- USB device path: `/dev/bus/usb/BUS/DEVICE`\n- Use `usbutils` package for lsusb and usb-devices commands\n", "filepath": "peripherals/list-usb-devices.md" } ], "program-management": [ { "name": "install-github-program", "category": "program-management", "description": "", "tags": [], "content": "You are helping Daniel install and set up a program from GitHub.\n\n## Your task\n\n1. **Understand the program**: Ask Daniel for the GitHub repository URL if not already provided\n2. **Determine the category**: Analyze the program's purpose and select the most appropriate category from Daniel's `~/programs` directory structure:\n - `ai-ml`: AI and machine learning applications\n - `communication`: Communication tools\n - `data-testing`: Data testing utilities\n - `design`: Design software\n - `development`: Development tools\n - `media-graphics`: Media and graphics applications\n - `monitoring-iot`: Monitoring and IoT tools\n - `storage-backup`: Storage and backup utilities\n - `system-utilities`: System utilities\n\n3. **Clone the repository**: Clone the GitHub repository to the appropriate subdirectory in `~/programs/[category]/`\n\n4. **Analyze setup requirements**:\n - Check for README, INSTALL, or setup documentation\n - Look for dependency requirements (package.json, requirements.txt, Cargo.toml, etc.)\n - Identify build steps (Makefile, build scripts, etc.)\n\n5. **Install dependencies**: Install any required dependencies using the appropriate package manager:\n - Python: `pip install -r requirements.txt` or `pip install -e .`\n - Node.js: `npm install` or `yarn install`\n - Rust: `cargo build --release`\n - System packages: `sudo apt install [packages]`\n\n6. **Build if necessary**: Run any build commands specified in the documentation\n\n7. **Create symlinks or add to PATH**: If the program has executables:\n - Either create symlinks in `~/.local/bin/` (or `/usr/local/bin/` with sudo)\n - Or document how to add the program to PATH\n\n8. **Test the installation**: Verify the program runs correctly\n\n9. **Document the installation**: Create a brief summary including:\n - Where the program was installed\n - Any configuration steps taken\n - How to run/access the program\n - Any additional setup needed\n\n## Important notes\n\n- Use `gh repo clone` when possible for authenticated GitHub access\n- Preserve the program's directory structure\n- Don't modify the original repository files unless necessary for configuration\n- If unsure about the category, ask Daniel for guidance\n- Always test before declaring success\n", "filepath": "program-management/install-github-program.md" } ], "python": [ { "name": "identify-python-environments", "category": "python", "description": "", "tags": [], "content": "# Python Environment Manager Identification\n\nYou are helping the user identify their system Python installation and all Python environment managers in use.\n\n## Your tasks:\n\n1. **Identify system Python:**\n - System Python version: `python3 --version`\n - System Python location: `which python3`\n - Check if python (unversioned) exists: `which python`\n - Python paths: `python3 -c \"import sys; print(sys.executable)\"`\n - List all Python installations: `which -a python python3 python2`\n\n2. 
**Check for pyenv:**\n - Check if installed: `which pyenv`\n - If installed:\n - Version: `pyenv --version`\n - Root directory: `echo $PYENV_ROOT` or default `~/.pyenv`\n - Installed Python versions: `pyenv versions`\n - Global Python: `pyenv global`\n - Local Python (if set): `pyenv local`\n - Check if properly initialized in shell: `grep -r \"pyenv init\" ~/.bashrc ~/.zshrc ~/.profile 2>/dev/null`\n\n3. **Check for Conda/Miniconda/Anaconda:**\n - Check if conda is installed: `which conda`\n - If installed:\n - Version: `conda --version`\n - Conda info: `conda info`\n - Base environment location: `echo $CONDA_PREFIX`\n - List environments: `conda env list`\n - Current environment: `echo $CONDA_DEFAULT_ENV`\n - Check initialization: `grep -r \"conda initialize\" ~/.bashrc ~/.zshrc ~/.profile 2>/dev/null`\n\n4. **Check for Mamba:**\n - Check if installed: `which mamba`\n - If installed:\n - Version: `mamba --version`\n - Environments: `mamba env list`\n\n5. **Check for Poetry:**\n - Check if installed: `which poetry`\n - If installed:\n - Version: `poetry --version`\n - Config location: `poetry config --list`\n - Virtual environment settings: `poetry config virtualenvs.path`\n\n6. **Check for pipenv:**\n - Check if installed: `which pipenv`\n - If installed:\n - Version: `pipenv --version`\n - Environment variable settings: `echo $PIPENV_VENV_IN_PROJECT`\n\n7. **Check for virtualenv/venv:**\n - Check if virtualenv is installed: `which virtualenv`\n - Check for virtualenvwrapper: `which virtualenvwrapper.sh`\n - If virtualenvwrapper found:\n - Check workon home: `echo $WORKON_HOME`\n - List environments: `lsvirtualenv` (if available)\n\n8. **Check for other Python version managers:**\n - asdf with Python plugin: `which asdf` and `asdf plugin list | grep python`\n - pythonz: `which pythonz`\n - Check for manual Python installations in common locations:\n - `/usr/local/bin/python*`\n - `/opt/python*`\n - `~/.local/bin/python*`\n\n9. **Analyze pip installations:**\n - System pip: `pip3 --version`\n - Pip location: `which pip3 pip`\n - User site packages: `python3 -m site --user-site`\n - List globally installed packages: `pip3 list --user`\n\n10. **Report summary:**\n - System Python version and location\n - All detected environment managers with versions\n - Which manager is currently active (if any)\n - Any conflicts or issues detected (e.g., multiple managers competing)\n - Recommendations:\n - If no environment manager is detected, suggest installing one (pyenv or conda)\n - If multiple managers are detected, explain their different use cases\n - Suggest best practices for the detected setup\n - Warn about potential PATH conflicts\n\n## Important notes:\n- Don't use sudo for these checks (environment managers are typically user-level)\n- Be clear about which Python is currently active vs. available\n- Explain the difference between system Python and managed versions\n- If shell initialization is missing for detected managers, point that out\n", "filepath": "python/identify-python-environments.md" } ], "repositories": [ { "name": "delete-old-repos", "category": "repositories", "description": "", "tags": [], "content": "This level of my filesystem is where I store github repos.\n\nPlease help me to clean it up a bit.\n\nHere's how I'd like you to help me \"prune\":\n\n- Looking at the last modified date for each folder, identify repositories which I have not edited in over 3 months\n- We can infer that these repos are no longer needed on my local. 
In the \"worst case\" I can just reclone them; so dont worry about deleting data\n- Delete these repos locally by deleting the folders\n\nIn addition (step two):\n\n- See if you can identify repos that look like they were \"one time projects\". These are repos I may have created for a single time-limited purpose and which I do not need to keep locally.\n\nIf you're not sure about whether a repo should be deleted, even though it seems to meet either rule, just ask.\n", "filepath": "repositories/delete-old-repos.md" }, { "name": "organise-repos", "category": "repositories", "description": "", "tags": [], "content": "This level of my filesystem contains repositories.\n\nI would like you to help me organise the folder.\n\nTo do this:\n\n- Identify common purposes that straddle multiple projects. You can infer the theme of the repository by its name in the filesystem.\n\nThen:\n\n- Organise those repositories into subfolders.\n\nThe thematic subfolders may already exist or you may need to create it. You can determine whether a folder is a repostiry or an organisation folder if you are unsure by inspecting its contents and seeing whether it contains a .git (etc).\n\nTry to strike a balance when organising: we don't want so many organisation folders that they're overly specific and we end up with as many folders as there are repositories. Avoid being overly granular. But achieve a reasonable level of topic clustering.\n\nIf you identify that a number of repos do not fit cleanly into any one of the existing topics (or those you create) you can create a misc folder to hold them.\n", "filepath": "repositories/organise-repos.md" } ], "security": [ { "name": "detect-spyware", "category": "security", "description": "Detect known spyware packages and suggest removal", "tags": [ "security", "spyware", "privacy", "audit", "project", "gitignored" ], "content": "You are helping the user identify any software known to contain spyware or privacy issues.\n\n## Process\n\n1. **Check for known problematic software**\n - Scan installed packages against known spyware list\n - Common categories to check:\n - Browser extensions\n - \"Free\" VPN applications\n - Screen recorders with telemetry\n - System \"optimizers\"\n - Certain proprietary drivers\n\n2. **Check for telemetry in common applications**\n - VS Code vs VSCodium (telemetry difference)\n - Ubuntu's whoopsie (error reporting)\n - Canonical's snapd telemetry\n - Google Chrome vs Chromium\n\n3. **Network activity monitoring**\n - Check for suspicious outbound connections: `sudo netstat -tupn | grep ESTABLISHED`\n - Identify processes making external connections\n - Suggest using `wireshark` or `tcpdump` for deeper analysis\n\n4. **Known spyware patterns to check**\n - Red Star OS components (North Korean)\n - Chinese software with known backdoors\n - Certain \"free\" antivirus software\n - Keyloggers disguised as utilities\n - Browser hijackers\n\n5. **Privacy-concerning legitimate software**\n - Software with excessive telemetry:\n - Ubuntu's apport (crash reporting)\n - popularity-contest\n - Some proprietary drivers\n - Suggest privacy-respecting alternatives\n\n6. **Browser extension audit**\n - Check Chrome/Firefox extension directories\n - Identify extensions with excessive permissions\n - Flag abandoned extensions (security risk)\n\n7. 
**Suggest privacy-focused alternatives**\n - VS Code \u2192 VSCodium\n - Chrome \u2192 Chromium or Firefox\n - Zoom \u2192 Jitsi\n - Windows telemetry remnants if dual-boot\n\n## Output\n\nProvide a report showing:\n- Any detected spyware (with severity level)\n- Privacy-concerning software with excessive telemetry\n- Suspicious network connections\n- Recommended actions for each finding\n- Privacy-focused alternatives to suggest", "filepath": "security/detect-spyware.md" }, { "name": "probe-vulnerabilities", "category": "security", "description": "Intelligently probe system for security vulnerabilities", "tags": [ "security", "audit", "vulnerabilities", "hardening", "project", "gitignored" ], "content": "You are helping the user identify security vulnerabilities they may wish to remediate.\n\n## Process\n\n1. **System update status**\n - Check for security updates: `apt list --upgradable | grep -i security`\n - Check unattended-upgrades status: `systemctl status unattended-upgrades`\n\n2. **Open ports and services**\n - List listening ports: `sudo ss -tlnp`\n - Identify unnecessary services: `systemctl list-unit-files --state=enabled`\n - Check firewall status: `sudo ufw status verbose`\n\n3. **SSH configuration review**\n - Check `sshd_config` for:\n - PermitRootLogin (should be 'no')\n - PasswordAuthentication (consider disabling)\n - Port (consider non-standard)\n - Check for weak keys: `ssh-keygen -l -f ~/.ssh/id_*.pub`\n\n4. **File permissions audit**\n - Check world-writable files: `find /home -type f -perm -002 2>/dev/null | head -20`\n - Check SUID/SGID binaries: `find / -type f \\( -perm -4000 -o -perm -2000 \\) 2>/dev/null`\n - Review sensitive file permissions: `~/.ssh`, `~/.gnupg`\n\n5. **User and authentication**\n - List users with shell access: `cat /etc/passwd | grep -v nologin | grep -v false`\n - Check password policy: `sudo chage -l $USER`\n - Review sudo configuration: `sudo -l`\n\n6. **Network security**\n - Check for IPv6 if not needed\n - Review DNS settings\n - Check for proxy configurations\n\n7. **Application security**\n - Check for outdated software with known CVEs\n - Review browser security settings\n - Check for auto-updating mechanisms\n\n8. **Suggest security tools**\n - `lynis` - Security auditing tool\n - `rkhunter` - Rootkit scanner\n - `aide` - File integrity checker\n - `fail2ban` - Intrusion prevention\n\n## Output\n\nProvide a security report showing:\n- Critical vulnerabilities (requiring immediate attention)\n- Medium priority issues\n- Low priority recommendations\n- Suggested remediation steps for each issue\n- Security hardening recommendations", "filepath": "security/probe-vulnerabilities.md" } ], "system-health": [ { "name": "optimize-pipewire", "category": "system-health", "description": "Evaluate and optimize PipeWire audio setup", "tags": [ "audio", "pipewire", "optimization", "system", "project", "gitignored" ], "content": "You are helping the user evaluate and optimize their PipeWire audio setup.\n\n## Process\n\n1. **Check PipeWire status**\n - Verify PipeWire is running: `systemctl --user status pipewire pipewire-pulse wireplumber`\n - Check version: `pipewire --version`\n - List audio devices: `pactl list sinks short` and `pactl list sources short`\n\n2. **Evaluate current configuration**\n - Check config files in `~/.config/pipewire/` and `/usr/share/pipewire/`\n - Review sample rate: `pactl info | grep \"Default Sample\"`\n - Check buffer settings and latency\n\n3. 
**Test audio quality**\n - Check for audio issues: `journalctl --user -u pipewire -n 50`\n - Look for xruns or underruns in logs\n - Test different sample rates if needed\n\n4. **Optimization suggestions**\n - For low latency (music production):\n - Adjust `default.clock.rate` and `default.clock.allowed-rates`\n - Set `default.clock.quantum` (64, 128, 256)\n - Configure `api.alsa.period-size`\n\n - For quality (media playback):\n - Higher sample rates (48000, 96000)\n - Larger buffer sizes\n\n - For Bluetooth:\n - Check codec usage: `pactl list | grep -i codec`\n - Suggest enabling higher quality codecs (LDAC, aptX)\n\n5. **Recommended tools**\n - `pavucontrol` - GUI volume control\n - `helvum` - PipeWire patchbay\n - `qpwgraph` - Qt-based graph manager\n - `easyeffects` - Audio effects for PipeWire\n\n6. **Create optimized config if needed**\n - Offer to create `~/.config/pipewire/pipewire.conf.d/` overrides\n - Suggest settings based on use case\n\n## Output\n\nProvide a report showing:\n- PipeWire status and version\n- Current audio configuration\n- Detected issues (if any)\n- Optimization recommendations\n- Suggested tools to install\n- Configuration changes (if applicable)", "filepath": "system-health/optimize-pipewire.md" }, { "name": "review-startup-services", "category": "system-health", "description": "Review system startup services, identify failed or deprecated services, and clean up boot jobs", "tags": [ "sysadmin", "systemd", "services", "boot", "cleanup", "troubleshooting" ], "content": "Review and clean up system startup services:\n\n1. **Failed Services**: Identify all services that failed to start\n2. **Enabled Services**: List all enabled services that start at boot\n3. **Deprecated Services**: Identify services that may be outdated or unnecessary\n4. **Service Dependencies**: Check for broken dependencies\n5. **Masked Services**: Review masked services\n6. 
**Timing Analysis**: Identify services that slow down boot\n\nRun the following diagnostic commands:\n\n**Failed and Problematic Services:**\n- `systemctl --failed` to list all failed services\n- `systemctl list-units --state=failed --all` for detailed failed units\n- `systemctl list-units --state=error` for services in error state\n- `systemctl list-units --state=not-found` for services with missing unit files\n\n**Enabled Services:**\n- `systemctl list-unit-files --state=enabled` for all enabled services\n- `systemctl list-units --type=service --state=running` for currently running services\n- `systemctl list-units --type=service --state=active` for active services\n\n**Boot-time Services:**\n- `systemd-analyze blame | head -n 30` for slowest boot services\n- `systemctl list-dependencies --before multi-user.target` for services started before multi-user\n- `systemctl list-dependencies --after multi-user.target` for services started after multi-user\n\n**Service Details for Failed Services:**\nFor each failed service, run:\n- `systemctl status [service-name]` for current status\n- `journalctl -u [service-name] -n 50` for recent logs\n- `systemctl cat [service-name]` to view unit file\n\n**Masked Services:**\n- `systemctl list-unit-files --state=masked` for masked services\n\n**Deprecated/Unnecessary Service Detection:**\n- Check for common deprecated services (networking.service on systemd systems, etc.)\n- Identify services for removed/uninstalled software\n- Find duplicate or redundant services\n\nAnalyze the output and provide:\n\n**Failed Services Report:**\n- List each failed service with its error message\n- Classify the failure:\n - Missing dependencies\n - Configuration errors\n - Service no longer needed\n - Hardware/driver related\n - Permission issues\n\n**Recommendations for each failed service:**\n- **Remove**: Service is deprecated or related to uninstalled software\n - Command: `sudo systemctl disable [service-name]`\n - Command: `sudo systemctl mask [service-name]` if it keeps trying to start\n\n- **Fix**: Service is needed but has configuration issues\n - Provide specific fix based on error logs\n - Command to restart after fix: `sudo systemctl restart [service-name]`\n\n- **Investigate**: Service failure needs deeper investigation\n - Provide relevant log excerpts\n - Suggest diagnostic steps\n\n**Boot Optimization Opportunities:**\n- Services that can be set to start on-demand instead of at boot\n- Services that can be disabled if not needed\n- Commands to disable: `sudo systemctl disable [service-name]`\n- Commands to mask: `sudo systemctl mask [service-name]`\n\n**Enabled Services Review:**\n- List all enabled services\n- Highlight services that may be unnecessary:\n - Services for unused hardware\n - Duplicate services\n - Development/testing services on production systems\n - Legacy services replaced by newer alternatives\n\n**Safety Warnings:**\n- Warn before suggesting removal of critical services\n- List services that should NOT be disabled\n- Suggest creating a snapshot/backup before making changes (especially for BTRFS/Snapper systems)\n\n**Action Plan:**\nProvide a prioritized list of actions:\n1. Safe to disable/mask (services clearly not needed)\n2. Should be fixed (services needed but failing)\n3. 
Investigate further (unclear if needed or cause of failure)\n\nFor each action, provide the exact commands to execute.\n\n**Post-cleanup:**\nAfter making changes, recommend:\n- `sudo systemctl daemon-reload` to reload systemd configuration\n- `systemd-analyze` to check boot time improvement\n- Review logs after next boot to ensure no new issues", "filepath": "system-health/review-startup-services.md" }, { "name": "system-health-checkup", "category": "system-health", "description": "Comprehensive system health checkup including disk health, SMART status, filesystem checks, and overall system status", "tags": [ "sysadmin", "diagnostics", "health", "disk", "smart", "filesystem", "comprehensive" ], "content": "Perform a comprehensive system health checkup:\n\n1. **Disk Health (SMART)**: Check all disk SMART status and health indicators\n2. **Filesystem Health**: Check all mounted filesystems for errors\n3. **System Resources**: CPU, memory, swap, and load status\n4. **Critical Services**: Verify critical system services are running\n5. **Security Updates**: Check for pending security updates\n6. **Disk Space**: Check all mounted filesystems for space issues\n7. **System Logs**: Check for recent critical errors\n8. **Hardware Errors**: Check for hardware-related issues in logs\n\nRun the following comprehensive diagnostic commands:\n\n**Disk Health (SMART):**\n- `sudo smartctl --scan` to identify all drives\n- `sudo smartctl -H /dev/sda` for health status (repeat for all drives found)\n- `sudo smartctl -A /dev/sda` for SMART attributes (repeat for all drives)\n- Check for: Reallocated sectors, Current pending sectors, Offline uncorrectable sectors\n\n**Filesystem Health:**\n- `df -h` for disk space on all filesystems\n- `sudo btrfs device stats /` if using BTRFS\n- Check mounted filesystems with `mount | grep -E '^/dev'`\n- For ext4: `sudo tune2fs -l /dev/sdXY | grep -i 'state\\|error'` for filesystem state\n\n**System Resources:**\n- `free -h` for memory usage\n- `uptime` for load averages\n- `top -b -n 1 | head -n 20` for process overview\n- `swapon --show` for swap status\n\n**Critical Services:**\n- `systemctl status systemd-journald` for logging service\n- `systemctl status cron` or `systemctl status crond` for task scheduler\n- `systemctl --failed` for any failed services\n\n**Updates and Security:**\n- `sudo apt-get update` to refresh package lists\n- `apt list --upgradable` to check for available updates\n- `grep -i security /var/log/apt/history.log | tail -n 20` for recent security updates\n\n**System Logs:**\n- `journalctl -p 3 -b` for errors in current boot\n- `journalctl -p 2 -b` for critical issues in current boot\n- `dmesg | grep -i 'error\\|fail\\|critical' | tail -n 20` for kernel errors\n\n**Hardware Status:**\n- `sensors` for temperature monitoring (if lm-sensors installed)\n- `dmesg | grep -i 'hardware error'` for hardware errors\n- `lspci -v | grep -i 'error'` for PCIe errors\n\n**Additional Checks:**\n- Check for excessive failed login attempts: `sudo grep -i 'failed password' /var/log/auth.log | tail -n 10`\n- Check for disk I/O errors: `dmesg | grep -i 'I/O error'`\n\nAnalyze all results and provide:\n\n**Summary Report:**\n- Overall system health status (Healthy, Warning, Critical)\n- Disk health status for each drive\n- Filesystem health and space status\n- Memory and swap status\n- Any failed services or critical errors\n- Pending updates (especially security)\n- Temperature warnings if applicable\n- Specific issues found with severity levels\n\n**Recommendations:**\n- 
Immediate actions needed (if any)\n- Preventive maintenance suggestions\n- Monitoring recommendations\n- Whether a reboot is recommended\n- Backup reminders if issues detected\n\n**Priority Issues:**\nList any issues in order of urgency:\n1. Critical (requires immediate attention)\n2. Warning (should be addressed soon)\n3. Informational (for awareness)\n\nIf smartmontools is not installed, offer to install with `sudo apt-get install smartmontools`.\nIf lm-sensors is not installed and temperature monitoring is desired, offer to install with `sudo apt-get install lm-sensors`.", "filepath": "system-health/system-health-checkup.md" }, { "name": "system-upgrade", "category": "system-health", "description": "Perform a full system upgrade with apt-get (updates package lists and upgrades all packages)", "tags": [ "sysadmin", "maintenance", "apt", "upgrade", "system" ], "content": "Perform a comprehensive system upgrade:\n\n1. Update the package lists from repositories\n2. Upgrade all installed packages to their latest versions\n3. Show which packages were upgraded\n4. Clean up any unnecessary packages\n\nUse sudo to execute these commands with appropriate privileges.\n\nRun the following commands sequentially:\n- `sudo apt-get update` to refresh package lists\n- `sudo apt-get upgrade -y` to upgrade packages\n- `sudo apt-get autoremove -y` to remove unnecessary packages\n- `sudo apt-get autoclean` to clean up package cache\n\nProvide a summary of what was updated and whether a reboot is recommended.", "filepath": "system-health/system-upgrade.md" }, { "name": "diagnose-printers", "category": "utilities", "description": "Diagnose installed printers and suggest removal of unused ones", "tags": [ "system", "printers", "cups", "cleanup", "project", "gitignored" ], "content": "You are helping the user review installed printers and identify ones that can be removed.\n\n## Process\n\n1. **Check CUPS status**\n - Verify CUPS is running: `systemctl status cups`\n - Access CUPS web interface info: check `http://localhost:631`\n\n2. **List configured printers**\n - Run: `lpstat -p -d`\n - Show detailed info: `lpstat -l -p`\n - List printer queues: `lpq -a`\n\n3. **Check printer usage**\n - View printer job history if available\n - Check `/var/log/cups/page_log` for usage patterns\n - Identify printers with no recent jobs\n\n4. **Identify printer drivers**\n - List installed printer drivers: `lpinfo -m | grep -i <printer-model>`\n - Check for unnecessary driver packages: `dpkg -l | grep -E \"printer|cups|hplip\"`\n\n5. **Test printer connectivity**\n - For network printers, ping their IPs\n - Check if printers are still on the network\n - Test print to each printer: `lp -d <printer-name> /etc/hosts`\n\n6. **Suggest removals**\n - Old/disconnected printers\n - Duplicate printer entries\n - Printers user no longer has access to\n - Unnecessary drivers\n\n7. 
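**Sweep network printers (optional)**\n - A rough sketch that automates the connectivity check in step 5 by pinging every host named in a network device URI; it assumes URIs of the form `scheme://host[:port]/...`:\n ```bash\n # List each queue with its device URI, then ping the network hosts found in those URIs\n lpstat -v\n lpstat -v | grep -oE '(ipps|ipp|socket|lpd)://[^/:]+' | sed 's|.*://||' | sort -u | while read host; do\n ping -c 1 -W 2 \"$host\" >/dev/null && echo \"$host reachable\" || echo \"$host NOT reachable\"\n done\n ```\n\n8. 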
**Cleanup commands**\n - Remove printer: `lpadmin -x <printer-name>`\n - Remove unused drivers: `apt remove <driver-package>`\n - Clean print queue: `cancel -a <printer-name>`\n - Disable CUPS if no printers needed: `sudo systemctl disable cups`\n\n## Output\n\nProvide a report showing:\n- List of configured printers with status\n- Last usage date (if available)\n- Network connectivity status\n- Installed printer drivers\n- Recommendations for removal\n- Cleanup commands\n- Potential space savings", "filepath": "utilities/diagnose-printers.md" } ], "virtualization": [ { "name": "check-virtualization", "category": "virtualization", "description": "", "tags": [], "content": "# Check Virtualization Setup\n\nYou are helping the user check if the system is properly set up to run virtualized workloads and remediate any issues.\n\n## Your tasks:\n\n1. **Check if CPU supports virtualization:**\n\n **Intel (VT-x):**\n ```bash\n grep -E \"vmx\" /proc/cpuinfo\n ```\n\n **AMD (AMD-V):**\n ```bash\n grep -E \"svm\" /proc/cpuinfo\n ```\n\n If no output, virtualization is not supported or not enabled in BIOS.\n\n2. **Check if virtualization is enabled in BIOS:**\n ```bash\n sudo apt install cpu-checker\n sudo kvm-ok\n ```\n\n If it says KVM can be used, virtualization is enabled.\n If not, user needs to enable it in BIOS/UEFI.\n\n3. **Check current virtualization software:**\n\n **KVM/QEMU:**\n ```bash\n which qemu-system-x86_64\n lsmod | grep kvm\n ```\n\n **VirtualBox:**\n ```bash\n which virtualbox\n VBoxManage --version\n ```\n\n **VMware:**\n ```bash\n which vmware\n systemctl status vmware\n ```\n\n **Docker (containerization):**\n ```bash\n docker --version\n systemctl status docker\n ```\n\n4. **Check KVM kernel modules:**\n ```bash\n lsmod | grep kvm\n ```\n\n Should show:\n - `kvm_intel` (for Intel)\n - `kvm_amd` (for AMD)\n - `kvm` (base module)\n\n If not loaded, try:\n ```bash\n sudo modprobe kvm\n sudo modprobe kvm_intel # or kvm_amd\n ```\n\n5. **Install KVM and related tools (if not installed):**\n ```bash\n sudo apt update\n sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager\n ```\n\n6. **Check libvirt status:**\n ```bash\n sudo systemctl status libvirtd\n ```\n\n If not running:\n ```bash\n sudo systemctl enable libvirtd\n sudo systemctl start libvirtd\n ```\n\n7. **Add user to required groups:**\n ```bash\n sudo usermod -aG libvirt $USER\n sudo usermod -aG kvm $USER\n ```\n\n User needs to log out and back in for group changes to take effect.\n\n8. **Verify user permissions:**\n ```bash\n groups\n ```\n\n Should include: `libvirt` and `kvm`\n\n9. **Check libvirt connectivity:**\n ```bash\n virsh list --all\n ```\n\n If permission denied, user is not in libvirt group or not logged back in.\n\n10. **Check virtualization networking:**\n\n **Default network:**\n ```bash\n virsh net-list --all\n ```\n\n If default network is not active:\n ```bash\n virsh net-start default\n virsh net-autostart default\n ```\n\n **Bridge networking:**\n ```bash\n ip link show\n brctl show # if bridge-utils installed\n ```\n\n11. 
**Check nested virtualization (if needed):**\n\n **For Intel:**\n ```bash\n cat /sys/module/kvm_intel/parameters/nested\n ```\n\n **For AMD:**\n ```bash\n cat /sys/module/kvm_amd/parameters/nested\n ```\n\n If shows `N` or `0`, nested virtualization is disabled.\n\n To enable:\n ```bash\n echo \"options kvm_intel nested=1\" | sudo tee /etc/modprobe.d/kvm-intel.conf\n # or for AMD:\n echo \"options kvm_amd nested=1\" | sudo tee /etc/modprobe.d/kvm-amd.conf\n ```\n\n Then reload:\n ```bash\n sudo modprobe -r kvm_intel\n sudo modprobe kvm_intel\n ```\n\n12. **Check IOMMU for PCIe passthrough (if needed):**\n ```bash\n dmesg | grep -i iommu\n ```\n\n If IOMMU is needed, add to kernel parameters in `/etc/default/grub`:\n ```\n GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash intel_iommu=on\"\n # or for AMD:\n GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash amd_iommu=on\"\n ```\n\n Then update grub:\n ```bash\n sudo update-grub\n sudo reboot\n ```\n\n13. **Check available storage pools:**\n ```bash\n virsh pool-list --all\n ```\n\n Create default pool if needed:\n ```bash\n virsh pool-define-as default dir --target /var/lib/libvirt/images\n virsh pool-start default\n virsh pool-autostart default\n ```\n\n14. **Check system resources for virtualization:**\n ```bash\n free -h\n df -h /var/lib/libvirt/images\n cat /proc/cpuinfo | grep \"processor\" | wc -l\n ```\n\n Recommendations:\n - At least 4GB RAM for light VMs\n - At least 20GB free disk space\n - Multiple CPU cores recommended\n\n15. **Test VM creation (small test):**\n ```bash\n virt-install --name test-vm \\\n --ram 512 \\\n --disk size=1 \\\n --cdrom /path/to/iso \\\n --graphics vnc \\\n --check all=off \\\n --dry-run\n ```\n\n16. **Check for conflicting virtualization:**\n VirtualBox and KVM can sometimes conflict. Check if both are installed:\n ```bash\n dpkg -l | grep -E \"virtualbox|qemu-kvm\"\n ```\n\n VirtualBox kernel modules can conflict with KVM:\n ```bash\n lsmod | grep vbox\n ```\n\n17. **Check virtualization acceleration:**\n ```bash\n ls -l /dev/kvm\n ```\n\n Should be:\n ```\n crw-rw---- 1 root kvm /dev/kvm\n ```\n\n18. **Install virt-manager (GUI) if desired:**\n ```bash\n sudo apt install virt-manager\n ```\n\n Test launch:\n ```bash\n virt-manager\n ```\n\n19. **Check for Secure Boot issues:**\n Secure Boot can prevent some virtualization modules from loading:\n ```bash\n mokutil --sb-state\n ```\n\n If Secure Boot is enabled and causing issues, user may need to:\n - Sign modules\n - Disable Secure Boot in BIOS\n - Use signed versions\n\n20. **Performance tuning:**\n\n **Enable hugepages for better performance:**\n ```bash\n sudo sysctl vm.nr_hugepages=1024\n echo \"vm.nr_hugepages=1024\" | sudo tee -a /etc/sysctl.conf\n ```\n\n **Check CPU governor:**\n ```bash\n cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor\n ```\n\n For virtualization, `performance` governor is recommended:\n ```bash\n sudo apt install cpufrequtils\n sudo cpufreq-set -g performance\n ```\n\n21. **Report findings:**\n Summarize:\n - CPU virtualization support status\n - BIOS/UEFI virtualization enabled status\n - KVM modules loaded status\n - libvirt status\n - User group membership\n - Network configuration\n - Nested virtualization status\n - Storage pools status\n - Available resources\n - Any conflicts or issues\n - Recommendations\n\n22. 
**Provide recommendations:**\n - Enable VT-x/AMD-V in BIOS if not enabled\n - Install KVM/QEMU if not present\n - Add user to libvirt and kvm groups\n - Set up default network\n - Enable nested virtualization if needed\n - Configure IOMMU for PCIe passthrough if needed\n - Install virt-manager for GUI management\n - Allocate sufficient resources\n - Resolve any conflicts (VirtualBox vs KVM)\n - Performance tuning suggestions\n\n23. **Basic virtualization commands to share:**\n - `virsh list --all` - List all VMs\n - `virsh start <vm-name>` - Start a VM\n - `virsh shutdown <vm-name>` - Shutdown a VM\n - `virsh destroy <vm-name>` - Force stop a VM\n - `virsh console <vm-name>` - Connect to VM console\n - `virsh net-list` - List networks\n - `virsh pool-list` - List storage pools\n - `virt-manager` - Launch GUI\n - `virt-install` - Create new VM from command line\n\n## Important notes:\n- Virtualization must be enabled in BIOS/UEFI\n- User must be in kvm and libvirt groups\n- Log out and back in after adding to groups\n- VirtualBox and KVM can conflict\n- Nested virtualization is disabled by default\n- IOMMU required for PCIe passthrough\n- Secure Boot may prevent module loading\n- Sufficient RAM and disk space needed\n- Performance governor recommended for VMs\n- Check if system is itself a VM before enabling nested virtualization\n", "filepath": "virtualization/check-virtualization.md" } ], "comfyui": [ { "name": "setup-comfyui", "category": "comfyui", "description": "Set up ComfyUI for AI image generation", "tags": [ "ai", "ml", "comfyui", "image-generation", "setup", "project", "gitignored" ], "content": "You are helping the user set up ComfyUI for AI image generation.\n\n## Process\n\n1. **Check if ComfyUI is already installed**\n - Check in `~/programs/ai-ml/ComfyUI` (Daniel's typical location)\n - Look for existing installation\n\n2. **Install prerequisites**\n - Python 3.10+ (check: `python3 --version`)\n - Git (check: `git --version`)\n - For AMD GPU (ROCm):\n - Ensure ROCm is installed: `rocminfo`\n - PyTorch with ROCm support needed\n\n3. **Clone ComfyUI repository**\n - Navigate to: `cd ~/programs/ai-ml/`\n - Clone: `git clone https://github.com/comfyanonymous/ComfyUI.git`\n - Enter directory: `cd ComfyUI`\n\n4. **Set up Python environment**\n - Create venv: `python3 -m venv venv`\n - Activate: `source venv/bin/activate`\n - Upgrade pip: `pip install --upgrade pip`\n\n5. **Install dependencies**\n - For AMD GPU (ROCm):\n ```bash\n pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0\n ```\n - Install ComfyUI requirements: `pip install -r requirements.txt`\n\n6. **Download initial models**\n - Create model directories if needed\n - Suggest downloading a base model (SD 1.5 or SDXL):\n - Models go in: `ComfyUI/models/checkpoints/`\n - VAE in: `ComfyUI/models/vae/`\n - LoRAs in: `ComfyUI/models/loras/`\n - Suggest civitai.com or huggingface.co for models\n\n7. **Test ComfyUI**\n - Run: `python main.py`\n - Should start on `http://127.0.0.1:8188`\n - Check logs for GPU detection\n\n8. **Create launch script**\n - Offer to create `~/programs/ai-ml/ComfyUI/run_comfyui.sh`:\n ```bash\n #!/bin/bash\n cd ~/programs/ai-ml/ComfyUI\n source venv/bin/activate\n python main.py\n ```\n - Make executable: `chmod +x run_comfyui.sh`\n\n9. 
**Suggest useful custom nodes**\n - ComfyUI Manager (for easy node installation)\n - ControlNet nodes\n - Ultimate SD Upscale\n - Efficiency nodes\n\n## Output\n\nProvide a summary showing:\n- Installation status\n- GPU detection status\n- Model directory locations\n- How to launch ComfyUI\n- Recommended next steps (model downloads, custom nodes)\n- Troubleshooting tips for AMD GPU", "filepath": "ai-tools/comfyui/setup-comfyui.md" } ], "ollama": [ { "name": "prune-ollama", "category": "ollama", "description": "", "tags": [], "content": "Review the ollama models that are installed on this computer.\n\nYour task is to prune those that I do not need to retain (but before doing so, confirm).\n\nPruning logic:\n\nconsider for pruning:\n\n- Models that serve almost the same purpose and for which there is no compelling reason to keep both. Ask which I'd like to keep or suggest for deletion the slightly inferior of the two. \n- Models that are deprecated \n\nFor example:\n\n- If you were to find llama 3. 1 8B and llama 3.2 8B, you would reasonably suggest that I consider pruning 3.1 as it's an older model and both are quantized. ", "filepath": "ai-tools/ollama/prune-ollama.md" }, { "name": "setup-ollama", "category": "ollama", "description": "Set up Ollama on the machine for local LLM inference", "tags": [ "ai", "ml", "ollama", "llm", "setup", "project", "gitignored" ], "content": "You are helping the user set up Ollama for local LLM inference.\n\n## Process\n\n1. **Check if Ollama is already installed**\n - Run: `ollama --version`\n - Check if service is running: `systemctl status ollama` or `sudo systemctl status ollama`\n\n2. **Install Ollama if needed**\n - Download and install: `curl -fsSL https://ollama.com/install.sh | sh`\n - Or manual install from https://ollama.com/download\n - Verify installation: `ollama --version`\n\n3. **Start Ollama service**\n - Start service: `systemctl start ollama` or `sudo systemctl start ollama`\n - Enable on boot: `systemctl enable ollama` or `sudo systemctl enable ollama`\n - Check status: `systemctl status ollama`\n\n4. **Verify GPU support (for AMD on Daniel's system)**\n - Check if ROCm is detected: `rocm-smi` or `rocminfo`\n - Ollama should auto-detect AMD GPU\n - Check Ollama logs for GPU recognition: `journalctl -u ollama -n 50`\n\n5. **Configure Ollama**\n - Check default model storage: `~/.ollama/models`\n - Environment variables (if needed):\n - `OLLAMA_HOST` - change port/binding\n - `OLLAMA_MODELS` - custom model directory\n - `OLLAMA_NUM_PARALLEL` - parallel requests\n - Edit systemd service if needed: `/etc/systemd/system/ollama.service`\n\n6. **Test Ollama**\n - Pull a test model: `ollama pull llama2` (or smaller: `ollama pull tinyllama`)\n - Run a test: `ollama run tinyllama \"Hello, how are you?\"`\n - Verify GPU usage during inference\n\n7. 
**Suggest initial models**\n - Based on Daniel's hardware (AMD GPU), suggest:\n - General: llama3.2, qwen2.5\n - Code: codellama, deepseek-coder\n - Fast: tinyllama, phi\n - Vision: llava, bakllava\n\n## Output\n\nProvide a summary showing:\n- Ollama installation status and version\n- Service status\n- GPU detection status\n- Default configuration\n- Recommended models to pull\n- Next steps for usage", "filepath": "ai-tools/ollama/setup-ollama.md" }, { "name": "suggest-ollama-models", "category": "ollama", "description": "Review installed Ollama models and suggest others based on hardware", "tags": [ "ai", "ml", "ollama", "models", "recommendations", "project", "gitignored" ], "content": "You are helping the user review their Ollama models and suggest new ones based on their hardware.\n\n## Process\n\n1. **Check currently installed models**\n - Run: `ollama list`\n - Show model sizes and last modified dates\n - Calculate total disk usage\n\n2. **Assess hardware capabilities**\n - Check GPU VRAM: `rocm-smi` (for AMD) or `nvidia-smi` (for NVIDIA)\n - Check system RAM: `free -h`\n - Determine recommended model sizes:\n - < 8GB VRAM: 7B models and smaller\n - 8-16GB VRAM: up to 13B models\n - 16-24GB VRAM: up to 34B models\n - 24GB+ VRAM: 70B+ models possible\n\n3. **Identify user's needs**\n - Ask about use cases:\n - General chat\n - Code generation\n - Data analysis\n - Creative writing\n - Vision/multimodal\n - Specialized domains\n\n4. **Suggest models by category**\n\n **General Purpose:**\n - llama3.2 (1B, 3B)\n - qwen2.5 (7B, 14B, 32B)\n - mistral (7B)\n - gemma2 (9B, 27B)\n\n **Code:**\n - codellama (7B, 13B, 34B)\n - deepseek-coder (6.7B, 33B)\n - starcoder2 (7B, 15B)\n\n **Fast/Small:**\n - tinyllama (1.1B)\n - phi3 (3.8B)\n\n **Multimodal:**\n - llava (7B, 13B, 34B)\n - bakllava (7B)\n\n **Specialized:**\n - meditron (medical)\n - sqlcoder (SQL generation)\n - wizardmath (mathematics)\n\n5. **Consider quantization levels**\n - Explain different quants (Q4, Q5, Q8, etc.)\n - Suggest appropriate quant for their VRAM\n\n6. **Cleanup suggestions**\n - Identify duplicate models\n - Suggest removing unused models: `ollama rm <model-name>`\n - Free up space for new models\n\n## Output\n\nProvide a report showing:\n- Currently installed models and total size\n- Hardware capacity summary\n- Recommended models based on:\n - Available VRAM\n - User's use cases\n - Current gaps in model coverage\n- Commands to install suggested models\n- Models that could be removed to save space", "filepath": "ai-tools/ollama/suggest-ollama-models.md" } ], "mcp": [ { "name": "manage-mcp-servers", "category": "mcp", "description": "Review installed MCP servers and work with user to add new ones", "tags": [ "mcp", "ai", "configuration", "servers", "project", "gitignored" ], "content": "You are helping the user manage their MCP (Model Context Protocol) servers.\n\n## Process\n\n1. **Check MCP configuration**\n - Look for MCP config: `~/mcp/` or `~/.config/mcp/`\n - Check Claude Code config: `~/.config/claude/`\n - Identify MCP server config files\n\n2. **List currently installed MCP servers**\n - Parse configuration files\n - For each server, show:\n - Server name\n - Server type/purpose\n - Status (running/stopped)\n - Configuration details\n\n3. **Check running MCP servers**\n - Look for running processes:\n ```bash\n ps aux | grep mcp\n ```\n - Check if servers are accessible\n\n4. 
**Suggest useful MCP servers**\n\n **Common MCP servers:**\n - Filesystem MCP (file operations)\n - GitHub MCP (GitHub integration)\n - Database MCP (PostgreSQL, SQLite, etc.)\n - Browser MCP (web automation)\n - Context7 MCP (documentation)\n - Memory MCP (persistent memory)\n - Search MCP (web search)\n\n5. **Install new MCP servers**\n - For each server user wants:\n - Check installation method (npm, pip, docker, etc.)\n - Install dependencies\n - Configure server\n - Add to MCP config\n\n **Example: Installing filesystem MCP**\n ```bash\n npm install -g @anthropic/mcp-server-filesystem\n ```\n\n **Example: Installing custom MCP server**\n ```bash\n git clone <repository-url>\n cd <repository-directory>\n npm install\n ```\n\n6. **Configure MCP servers**\n - Add server to config file\n - Example config:\n ```json\n {\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"mcp-server-filesystem\",\n \"args\": [\"/path/to/allowed/directory\"]\n },\n \"github\": {\n \"command\": \"mcp-server-github\",\n \"env\": {\n \"GITHUB_TOKEN\": \"your-token-here\"\n }\n }\n }\n }\n ```\n\n7. **Test MCP server connectivity**\n - Start servers\n - Verify they're accessible by Claude Code\n - Test basic operations\n\n8. **Document MCP setup**\n - Offer to create `~/mcp/README.md` documenting:\n - Installed servers\n - Configuration\n - Usage examples\n - Troubleshooting\n\n9. **Suggest workflows**\n - Recommend MCP server combinations for common tasks\n - Show example use cases\n\n## Output\n\nProvide a summary showing:\n- Currently installed MCP servers\n- Server status and configuration\n- New servers installed (if any)\n- Configuration changes made\n- Usage examples\n- Next steps or recommendations", "filepath": "configuration/mcp/manage-mcp-servers.md" } ], "node": [ { "name": "node-version-check", "category": "node", "description": "", "tags": [], "content": "Check which version of node I have installed on this computer.\n", "filepath": "dev-tools/node/node-version-check.md" }, { "name": "npm-install", "category": "node", "description": "", "tags": [], "content": "Install Node Package Manager (npm).\n", "filepath": "dev-tools/node/npm-install.md" } ], "yadm": [ { "name": "check-yadm", "category": "yadm", "description": "", "tags": [], "content": "# Check YADM Status\n\nYou are helping the user check the status of their YADM (Yet Another Dotfiles Manager) repository.\n\n## Task\n\n1. Check which files YADM is currently tracking:\n ```bash\n yadm list -a\n ```\n\n2. Show the current repository status (modified, staged, untracked files):\n ```bash\n yadm status\n ```\n\n3. Show recent commit history (last 10 commits):\n ```bash\n yadm log --oneline -10\n ```\n\n4. Check if there are any unpushed commits:\n ```bash\n yadm log origin/main..HEAD --oneline\n ```\n (Note: Replace 'main' with the actual branch name if different, e.g., 'master')\n\n5. Show which remote repository YADM is connected to:\n ```bash\n yadm remote -v\n ```\n\n6. 
Summarize the findings for the user:\n - Total number of tracked files\n - Any uncommitted changes\n - Any unpushed commits\n - Current branch\n - Remote repository location\n\n## Additional Checks (Optional)\n\nIf requested, you can also:\n- Show files that have been modified: `yadm diff --name-only`\n- Check for files that should be tracked but aren't: Look for common dotfiles in the home directory\n- Verify the YADM repository is healthy: `yadm fsck`\n\n## Notes\n\n- YADM is a wrapper around git specifically for managing dotfiles\n- All git commands work with yadm by replacing 'git' with 'yadm'\n- The YADM repository is stored in `~/.local/share/yadm/repo.git`\n", "filepath": "dev-tools/yadm/check-yadm.md" }, { "name": "manually-update-yadm", "category": "yadm", "description": "", "tags": [], "content": "# Manually Update YADM\n\nYou are helping the user manually update their YADM (Yet Another Dotfiles Manager) repository.\n\n## Task\n\n1. First, check the current status of the YADM repository:\n ```bash\n yadm status\n ```\n\n2. Show the user what files have been modified:\n ```bash\n yadm diff\n ```\n\n3. Ask the user which files they want to add to the commit. If they want to add all changes, use:\n ```bash\n yadm add -u\n ```\n\n Or for specific files:\n ```bash\n yadm add ...\n ```\n\n4. Show the staged changes:\n ```bash\n yadm status\n ```\n\n5. Ask the user for a commit message, then create the commit:\n ```bash\n yadm commit -m \"user's commit message\"\n ```\n\n6. Push the changes to the remote repository:\n ```bash\n yadm push\n ```\n\n7. Confirm the update was successful and show the final status.\n\n## Notes\n\n- YADM works exactly like git but is specifically for dotfiles\n- Be careful not to commit sensitive information like API keys or passwords\n- If the user has a pre-commit hook or other git hooks configured, they will run automatically\n- If there are conflicts or issues, guide the user through resolving them\n", "filepath": "dev-tools/yadm/manually-update-yadm.md" } ], "chunking": [ { "name": "chunk-this-dir", "category": "chunking", "description": "", "tags": [], "content": "This folder contains a lot of files.\n\nPlease chunk them by creating numeric subfolders each containing a specific number of files.\n\nI will provide the desired number.\n", "filepath": "filesystem-organization/chunking/chunk-this-dir.md" } ], "tidy-up": [ { "name": "desktop-tidy", "category": "tidy-up", "description": "", "tags": [], "content": "Recurse to my desktop (~/Desktop)\n\nHelp me clean it up!\n\nSee what non-launcher files I have\n\nCreate filetype folders like audio and move files of that association into those subfolders.\n\nIf you think you can see files which are probably no longer needed, ask me if they can be deleted\n", "filepath": "filesystem-organization/tidy-up/desktop-tidy.md" }, { "name": "organize-loose-files", "category": "tidy-up", "description": "", "tags": [], "content": "You are helping Daniel organize loose files in a directory into a logical folder structure.\n\n## Your Task\n\n1. **Analyze the current directory**: List all files (not subdirectories) in the current working directory\n2. **Categorize files**: Group files by:\n - File type/extension\n - Common naming patterns\n - Apparent purpose or content (infer from filenames)\n3. **Propose folder structure**: Suggest a logical folder organization scheme based on what you find\n4. 
**Present the plan**: Show Daniel:\n - How many files will go into each proposed folder\n - The proposed folder names\n - A few example files that would go into each folder\n5. **Ask for confirmation**: Get Daniel's approval before making any changes\n6. **Execute if approved**: Create the folders and move files accordingly\n\n## Guidelines\n\n- Be intelligent about categorization - don't just group by file extension\n- Look for patterns in filenames (dates, projects, categories)\n- Suggest meaningful folder names\n- If there are many files of the same type doing different things, subcategorize them\n- Always preserve file names when moving\n- Create a summary report after completion showing what was organized\n\n## Safety\n\n- NEVER delete files\n- NEVER overwrite existing files\n- If a filename conflict occurs during move, append a number suffix\n- Ask before proceeding with the actual file moves\n", "filepath": "filesystem-organization/tidy-up/organize-loose-files.md" } ], "hardware-profilers": [ { "name": "hardware-identity", "category": "hardware-profilers", "description": "", "tags": [], "content": "You are identifying basic hardware information including manufacturer, model, and serial numbers.\n\n## Your Task\n\nExtract and display system identification information:\n\n### 1. System Identity\n- **Manufacturer**: System/chassis manufacturer\n- **Product name**: System model/product name\n- **Serial number**: System serial number\n- **UUID**: System UUID\n- **SKU**: Stock keeping unit number (if available)\n\n### 2. Motherboard Identity\n- **Manufacturer**: Board manufacturer\n- **Product name**: Board model\n- **Serial number**: Board serial number\n- **Version**: Board version/revision\n\n### 3. BIOS/UEFI Identity\n- **Vendor**: BIOS manufacturer\n- **Version**: BIOS version\n- **Release date**: BIOS release date\n- **Revision**: Firmware revision\n\n### 4. 
Chassis Identity\n- **Manufacturer**: Chassis manufacturer\n- **Type**: Chassis type (desktop, laptop, tower, etc.)\n- **Serial number**: Chassis serial number\n- **Asset tag**: Asset tag (if configured)\n\n## Commands to Use\n\n**Primary identification:**\n- `sudo dmidecode -t system`\n- `sudo dmidecode -t baseboard`\n- `sudo dmidecode -t bios`\n- `sudo dmidecode -t chassis`\n\n**Additional information:**\n- `hostnamectl` - System hostname and other details\n- `cat /sys/class/dmi/id/product_name`\n- `cat /sys/class/dmi/id/sys_vendor`\n- `cat /sys/class/dmi/id/board_vendor`\n- `cat /sys/class/dmi/id/bios_version`\n\n**Hardware summary:**\n- `sudo lshw -short` - Quick hardware overview\n- `inxi -M` - Machine data (if available)\n\n## Output Format\n\nPresent a clean identification card format:\n\n```\n=============================================================================\n HARDWARE IDENTIFICATION\n=============================================================================\n\nSYSTEM INFORMATION\n------------------\nManufacturer: [vendor]\nProduct Name: [model]\nSerial Number: [S/N]\nUUID: [uuid]\nSKU Number: [sku]\n\nMOTHERBOARD INFORMATION\n-----------------------\nManufacturer: [vendor]\nProduct Name: [model]\nVersion: [version]\nSerial Number: [S/N]\n\nBIOS/UEFI INFORMATION\n---------------------\nVendor: [vendor]\nVersion: [version]\nRelease Date: [date]\nFirmware Revision: [revision]\n\nCHASSIS INFORMATION\n-------------------\nManufacturer: [vendor]\nType: [type]\nSerial Number: [S/N]\nAsset Tag: [tag]\n\n=============================================================================\n```\n\n### JSON Format (AI-Readable)\n\n```json\n{\n \"system\": {\n \"manufacturer\": \"\",\n \"product_name\": \"\",\n \"serial_number\": \"\",\n \"uuid\": \"\",\n \"sku\": \"\"\n },\n \"motherboard\": {\n \"manufacturer\": \"\",\n \"product_name\": \"\",\n \"version\": \"\",\n \"serial_number\": \"\"\n },\n \"bios\": {\n \"vendor\": \"\",\n \"version\": \"\",\n \"release_date\": \"\",\n \"revision\": \"\"\n },\n \"chassis\": {\n \"manufacturer\": \"\",\n \"type\": \"\",\n \"serial_number\": \"\",\n \"asset_tag\": \"\"\n }\n}\n```\n\n## Execution Guidelines\n\n1. **Use sudo**: dmidecode requires root privileges\n2. **Handle missing data**: Some fields may be unavailable or say \"Not Specified\"\n3. **Privacy consideration**: Serial numbers are sensitive - note if this is for sharing\n4. **Validate output**: Cross-check using multiple methods\n5. **Format cleanly**: Align fields for easy reading\n\n## Important Notes\n\n- Virtual machines may show generic or missing hardware IDs\n- Some manufacturers don't populate all DMI fields\n- Serial numbers should be handled with care for security/privacy\n- Asset tags are typically only set in enterprise environments\n\nBe concise and present only the identification information requested.\n", "filepath": "hardware/hardware-profilers/hardware-identity.md" }, { "name": "hardware-profile", "category": "hardware-profilers", "description": "", "tags": [], "content": "You are creating a comprehensive hardware profile of the system that is both AI-readable and human-readable.\n\n## Your Task\n\nGenerate a detailed hardware summary by systematically profiling the following components:\n\n### 1. 
CPU Profile\n- **Model and specifications** using `lscpu`\n- **Architecture details**: cores, threads, cache sizes\n- **CPU frequency**: current, min, max\n- **Virtualization support**: VT-x/AMD-V capabilities\n- **CPU vulnerabilities**: Spectre, Meltdown, etc.\n- **Performance governor** settings\n\n### 2. Memory Profile\n- **Total RAM** using `free -h` and `dmidecode -t memory`\n- **Memory type and speed**: DDR3/DDR4/DDR5, frequency\n- **Number of modules** and configuration (slots used/available)\n- **Swap configuration**: size, type (partition/file)\n- **Current usage** and available memory\n\n### 3. Storage Profile\n- **All storage devices** using `lsblk`, `fdisk -l`, and `smartctl`\n- **Drive types**: NVMe, SSD, HDD, eMMC\n- **Capacity and usage** for each device\n- **Partition layout** and filesystem types\n- **SMART health status** for drives that support it\n- **Mount points** and usage percentages\n\n### 4. Graphics Profile\n- **GPU information** using `lspci | grep -i vga`, `lshw -C display`\n- **GPU vendor and model**: NVIDIA, AMD, Intel\n- **Driver information**: version and type (proprietary/open-source)\n- **Display connections** and active monitors\n- **VRAM** capacity (if available)\n- **Vulkan/OpenGL support** using `vulkaninfo` and `glxinfo` if available\n\n### 5. Network Profile\n- **Network interfaces** using `ip addr` and `lshw -C network`\n- **Interface types**: Ethernet, WiFi, virtual\n- **MAC addresses** for physical interfaces\n- **Link speeds** and duplex settings\n- **Wireless capabilities**: protocols supported (802.11ac/ax, etc.)\n- **Active connections** and IP configuration\n\n### 6. System Board and Firmware\n- **Motherboard details** using `dmidecode -t baseboard`\n- **BIOS/UEFI information**: vendor, version, release date\n- **System manufacturer and model**\n- **Serial numbers** (if accessible and relevant)\n- **Firmware capabilities**: UEFI features, secure boot status\n\n### 7. Peripherals and Devices\n- **USB devices** using `lsusb`\n- **PCI devices** using `lspci`\n- **Audio devices** using `aplay -l` and `lshw -C sound`\n- **Input devices**: keyboards, mice, touchpads\n- **Connected storage**: external drives, card readers\n\n### 8. 
Thermal and Power\n- **Temperature sensors** using `sensors` (if lm-sensors installed)\n- **Fan speeds** and thermal zones\n- **Battery information** (for laptops) using `upower -i /org/freedesktop/UPower/devices/battery_BAT0`\n- **Power management** settings and capabilities\n\n## Commands to Use\n\n**System Overview:**\n- `inxi -Fxz` - Comprehensive system information\n- `hwinfo --short` - Hardware summary\n\n**CPU:**\n- `lscpu`\n- `cat /proc/cpuinfo`\n- `cpufreq-info` (if available)\n\n**Memory:**\n- `free -h`\n- `sudo dmidecode -t memory`\n- `cat /proc/meminfo`\n\n**Storage:**\n- `lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT,MODEL`\n- `sudo fdisk -l`\n- `sudo smartctl -a /dev/sdX` (for each drive)\n- `df -h`\n\n**Graphics:**\n- `lspci | grep -i vga`\n- `sudo lshw -C display`\n- `nvidia-smi` (for NVIDIA GPUs)\n- `glxinfo | grep -i \"opengl version\"`\n\n**Network:**\n- `ip addr`\n- `sudo lshw -C network`\n- `iwconfig` (for wireless)\n- `ethtool eth0` (for Ethernet)\n\n**Motherboard/BIOS:**\n- `sudo dmidecode -t baseboard`\n- `sudo dmidecode -t bios`\n- `sudo dmidecode -t system`\n\n**Peripherals:**\n- `lsusb -v`\n- `lspci -v`\n- `aplay -l`\n\n**Thermal:**\n- `sensors`\n- `cat /sys/class/thermal/thermal_zone*/temp`\n\n## Output Format\n\nCreate a structured report with the following sections:\n\n### Executive Summary\n- System type (desktop/laptop/server)\n- Overall hardware generation/age\n- Primary use case capabilities (gaming, development, general use)\n\n### Detailed Hardware Profile\n\n**CPU:**\n- Model: [full CPU name]\n- Cores/Threads: [physical cores]/[logical threads]\n- Base/Max Frequency: [GHz]\n- Cache: L1/L2/L3 sizes\n- Features: [virtualization, security features]\n\n**Memory:**\n- Total: [GB] ([type] @ [speed])\n- Configuration: [X modules in Y slots]\n- Swap: [size] ([type])\n\n**Storage:**\n- Drive 1: [model] ([type]) - [capacity] - Health: [status]\n- Drive 2: ...\n- Total capacity: [TB]\n- Partition layout: [summary]\n\n**Graphics:**\n- GPU: [model]\n- Driver: [version and type]\n- VRAM: [size]\n- Displays: [count and configuration]\n\n**Network:**\n- Ethernet: [model] - [speed]\n- WiFi: [model] - [protocols]\n- Active connections: [summary]\n\n**Motherboard:**\n- Manufacturer: [brand]\n- Model: [model number]\n- BIOS: [version] ([date])\n\n**Peripherals:**\n- [List of notable USB/PCI devices]\n\n**Thermal/Power:**\n- Current temperatures: [CPU/GPU/etc.]\n- Battery: [status if laptop]\n\n### Hardware Capabilities Assessment\n\nRate and describe:\n- **Performance tier**: Entry/Mid/High-end for [CPU/GPU/Storage/RAM]\n- **Bottlenecks**: Identify any limiting components\n- **Upgrade recommendations**: Suggest meaningful upgrades if applicable\n- **Compatibility notes**: Linux driver status, known issues\n\n### AI-Readable Summary (JSON)\n\nProvide a structured JSON object:\n```json\n{\n \"system_type\": \"desktop|laptop|server\",\n \"cpu\": {\n \"model\": \"\",\n \"cores\": 0,\n \"threads\": 0,\n \"base_ghz\": 0.0,\n \"max_ghz\": 0.0\n },\n \"memory\": {\n \"total_gb\": 0,\n \"type\": \"\",\n \"speed_mhz\": 0\n },\n \"storage\": [\n {\n \"device\": \"\",\n \"type\": \"nvme|ssd|hdd\",\n \"capacity_gb\": 0,\n \"health\": \"good|warning|critical\"\n }\n ],\n \"gpu\": {\n \"model\": \"\",\n \"vendor\": \"nvidia|amd|intel\",\n \"driver\": \"\",\n \"vram_gb\": 0\n },\n \"network\": {\n \"ethernet\": {\"present\": true, \"speed_mbps\": 0},\n \"wifi\": {\"present\": true, \"standard\": \"\"}\n }\n}\n```\n\n## Execution Guidelines\n\n1. 
**Run commands systematically** in the order listed above\n2. **Handle missing tools gracefully**: Note if `inxi`, `hwinfo`, `smartctl`, or `sensors` are not installed\n3. **Use sudo appropriately**: Many hardware queries require root privileges\n4. **Parse output carefully**: Extract relevant information, filter noise\n5. **Cross-reference data**: Verify findings using multiple tools when possible\n6. **Format for readability**: Use tables, bullet points, and clear hierarchies\n7. **Include context**: Add brief explanations for technical specs\n8. **Flag concerns**: Highlight any hardware issues, deprecated drivers, or thermal problems\n\n## Important Notes\n\n- Some commands may require installation of additional packages (`lm-sensors`, `smartmontools`, `pciutils`, etc.)\n- SMART data requires drives that support it (most modern SSDs/HDDs)\n- GPU information varies significantly by vendor\n- Thermal data availability depends on sensor support\n- Always respect privacy: avoid exposing serial numbers in shared contexts\n\nBe thorough, accurate, and provide actionable insights about the hardware configuration.\n", "filepath": "hardware/hardware-profilers/hardware-profile.md" } ], "by-component": [ { "name": "profile-cpu", "category": "by-component", "description": "", "tags": [], "content": "You are performing an exhaustive CPU (processor) profile of the system.\n\n## Your Task\n\nGenerate a comprehensive CPU analysis covering all aspects of processor hardware, configuration, features, and performance.\n\n### 1. CPU Hardware Identification\n- **Vendor**: Intel, AMD, ARM, or other\n- **Model name**: Full processor name\n- **Microarchitecture**: Zen 4, Raptor Lake, etc.\n- **Family**: CPU family number\n- **Model**: CPU model number\n- **Stepping**: CPU stepping/revision\n- **CPU ID**: CPUID signature\n- **Manufacturing process**: Node size (5nm, 7nm, etc.)\n\n### 2. Core and Thread Configuration\n- **Physical cores**: Actual CPU cores\n- **Logical processors**: Total threads (with SMT/HT)\n- **Threads per core**: 1 or 2 (SMT/Hyper-Threading)\n- **Cores per socket**: Core count per CPU\n- **Sockets**: Number of CPU sockets\n- **NUMA nodes**: Non-uniform memory access nodes\n- **Core layout**: Physical topology and placement\n\n### 3. Frequency and Clock Information\n- **Base frequency**: Guaranteed base clock\n- **Maximum boost frequency**: Single-core turbo\n- **All-core boost**: Multi-core sustained boost\n- **Current frequencies**: Per-core current clocks\n- **Frequency scaling**: Available scaling governors\n- **Turbo mode**: Status and configuration\n- **C-states**: Power saving states available\n- **P-states**: Performance states\n\n### 4. Cache Hierarchy\n- **L1 data cache**: Per-core L1D size\n- **L1 instruction cache**: Per-core L1I size\n- **L2 cache**: Per-core or shared L2 size\n- **L3 cache**: Shared last-level cache size\n- **L4 cache**: If present (rare)\n- **Cache line size**: Typical 64 bytes\n- **Cache associativity**: Set-associative configuration\n- **Total cache**: Sum of all cache levels\n\n### 5. CPU Features and Extensions\n- **Instruction sets**: SSE, AVX, AVX2, AVX-512\n- **Virtualization**: VT-x, AMD-V, VT-d, AMD-Vi\n- **Security features**: SGX, SEV, TDX, etc.\n- **AES-NI**: Hardware AES acceleration\n- **SHA extensions**: Hardware SHA acceleration\n- **FMA**: Fused multiply-add\n- **BMI/BMI2**: Bit manipulation instructions\n- **TSX**: Transactional synchronization\n- **Hardware monitoring**: PMU, performance counters\n\n### 6. 
Virtualization Capabilities\n- **Virtualization enabled**: VT-x/AMD-V status\n- **IOMMU**: VT-d/AMD-Vi for device passthrough\n- **Nested paging**: EPT/RVI support\n- **Nested virtualization**: Capability\n- **Hardware isolation**: SGX, SEV, TDX\n- **Virtual machine extensions**: Available features\n\n### 7. Security Features\n- **CPU vulnerabilities**: Spectre, Meltdown, etc.\n- **Mitigations**: Enabled security mitigations\n- **Performance impact**: Mitigation overhead\n- **Secure boot**: Support status\n- **Memory encryption**: SME, SEV support\n- **Control-flow enforcement**: CET, IBT\n- **Branch prediction**: IBRS, STIBP status\n\n### 8. Thermal and Power Management\n- **TDP**: Thermal design power\n- **Maximum temperature**: Tjunction max\n- **Current temperature**: Per-core temps\n- **Thermal throttling**: Status and history\n- **Power consumption**: Current package power\n- **Power limits**: PL1, PL2 settings\n- **Voltage**: Core voltage\n- **Power states**: C-states and P-states usage\n\n### 9. Performance Characteristics\n- **BogoMIPS**: Rough performance indicator\n- **CPU benchmark**: If available (sysbench, etc.)\n- **Context switch rate**: Scheduler efficiency\n- **Interrupts**: Interrupt rate per second\n- **Load average**: 1, 5, 15 minute averages\n- **CPU utilization**: Per-core usage\n- **Performance counters**: PMU data if accessible\n\n### 10. Memory Controller and Architecture\n- **Memory controller**: Integrated or discrete\n- **Memory channels**: Number of channels\n- **Maximum memory**: Supported RAM capacity\n- **Memory types**: Supported DDR generations\n- **Memory speed**: Maximum supported speed\n- **ECC support**: Error-correcting code capability\n- **Prefetchers**: Hardware prefetch engines\n\n### 11. Interconnect and Topology\n- **CPU interconnect**: QPI, UPI, Infinity Fabric\n- **Interconnect speed**: GT/s or MHz\n- **NUMA configuration**: Node topology\n- **Core-to-core latency**: Inter-core communication\n- **Socket topology**: Multi-socket layout\n- **L3 slicing**: Cache slice distribution\n\n### 12. 
Microcode and Firmware\n- **Microcode version**: Current CPU microcode\n- **Microcode date**: Release date\n- **Update available**: Check for updates\n- **Speculative execution**: Firmware mitigations\n\n## Commands to Use\n\n**Basic CPU information:**\n- `lscpu`\n- `cat /proc/cpuinfo`\n- `lscpu -e` - Extended CPU list\n- `sudo dmidecode -t processor`\n\n**Detailed specifications:**\n- `lscpu -J` - JSON output for parsing\n- `sudo lshw -class processor`\n- `cpuid` (if installed)\n- `x86info` (if installed, x86 systems)\n\n**Frequency information:**\n- `lscpu | grep MHz`\n- `cat /proc/cpuinfo | grep MHz`\n- `cpufreq-info` (if installed)\n- `cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq`\n- `cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq`\n- `cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor`\n\n**Cache information:**\n- `lscpu -C`\n- `getconf -a | grep CACHE`\n- `cat /sys/devices/system/cpu/cpu0/cache/index*/size`\n\n**CPU features:**\n- `cat /proc/cpuinfo | grep flags | head -1`\n- `lscpu | grep -i flag`\n\n**Virtualization:**\n- `lscpu | grep -i virtualization`\n- `cat /proc/cpuinfo | grep -E '(vmx|svm)'`\n- `dmesg | grep -i \"vt-d\\|amd-vi\"`\n\n**Security and vulnerabilities:**\n- `lscpu | grep -i vulnerab`\n- `cat /sys/devices/system/cpu/vulnerabilities/*`\n- `spectre-meltdown-checker` (if installed)\n\n**Thermal and power:**\n- `sensors` (if lm-sensors installed)\n- `cat /sys/class/thermal/thermal_zone*/temp`\n- `cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq`\n- `sudo turbostat --quiet --show Package,Core,CPU,Avg_MHz,Busy%,Bzy_MHz,PkgTmp --interval 1` (if available)\n- `sudo powertop --time=10` (if installed)\n\n**Performance monitoring:**\n- `top -bn1 | grep \"Cpu(s)\"`\n- `mpstat -P ALL 1 5` (if sysstat installed)\n- `vmstat 1 5`\n- `uptime`\n- `cat /proc/loadavg`\n\n**Microcode:**\n- `cat /proc/cpuinfo | grep microcode | head -1`\n- `dmesg | grep microcode`\n\n**NUMA topology:**\n- `numactl --hardware`\n- `lscpu | grep NUMA`\n- `cat /sys/devices/system/node/node*/cpulist`\n\n**Benchmarking (optional):**\n- `sysbench cpu --threads=$(nproc) run` (if installed)\n- `7z b` (7-zip benchmark if installed)\n\n## Output Format\n\n### Executive Summary\n```\nCPU: [manufacturer] [model name]\nArchitecture: [microarchitecture] ([process node])\nCores/Threads: [physical cores] cores / [logical threads] threads\nBase/Boost: [base GHz] / [boost GHz]\nCache: [L1] + [L2] + [L3 MB]\nFeatures: [key features like AVX-512, virtualization]\n```\n\n### Detailed CPU Profile\n\n**Hardware Identification:**\n- Vendor: [Intel/AMD/ARM]\n- Model Name: [full processor name]\n- Microarchitecture: [architecture name]\n- Family: [hex family]\n- Model: [hex model]\n- Stepping: [stepping number]\n- CPU ID: [cpuid signature]\n- Manufacturing: [nm process]\n\n**Core Configuration:**\n- Physical Cores: [count]\n- Logical Processors: [count]\n- Threads per Core: [1/2]\n- Sockets: [count]\n- NUMA Nodes: [count]\n- Topology: [description]\n\n**Frequency Information:**\n- Base Frequency: [GHz]\n- Maximum Turbo: [GHz] (single-core)\n- All-Core Turbo: [GHz]\n- Current Frequencies:\n - CPU 0: [MHz]\n - CPU 1: [MHz]\n - ...\n- Scaling Governor: [powersave/performance/schedutil]\n- Turbo Boost: [Enabled/Disabled]\n\n**Cache Hierarchy:**\n- L1 Data Cache: [KB] per core ([total KB])\n- L1 Instruction Cache: [KB] per core ([total KB])\n- L2 Cache: [KB/MB] per core ([total MB])\n- L3 Cache: [MB] shared ([MB] total)\n- Cache Line Size: [bytes]\n- Total Cache: [MB]\n\n**Instruction Set 
Extensions:**\n- Base: [x86-64-v2/v3/v4]\n- SIMD: [SSE4.2, AVX, AVX2, AVX-512, etc.]\n- Virtualization: [VT-x/AMD-V, VT-d/AMD-Vi]\n- Security: [AES-NI, SHA, SGX, SEV]\n- Other: [FMA, BMI, BMI2, TSX, etc.]\n\n**Feature Flags (Key):**\n```\n[vmx/svm, aes, avx, avx2, avx512f, sha_ni, fma, bmi1, bmi2, etc.]\n```\n\n**Virtualization Capabilities:**\n- VT-x/AMD-V: [Enabled/Disabled]\n- VT-d/AMD-Vi (IOMMU): [Enabled/Disabled]\n- EPT/RVI: [Supported]\n- Nested Virtualization: [Supported/Not Supported]\n- Hardware Isolation: [SGX/SEV/TDX support]\n\n**Security Status:**\n- Vulnerabilities:\n - Spectre v1: [mitigated/vulnerable]\n - Spectre v2: [mitigated/vulnerable]\n - Meltdown: [mitigated/vulnerable]\n - [other vulnerabilities...]\n- Active Mitigations: [list]\n- Performance Impact: [estimated %]\n\n**Thermal and Power:**\n- TDP: [W]\n- Maximum Temperature: [\u00b0C]\n- Current Temperature:\n - Package: [\u00b0C]\n - Core 0: [\u00b0C]\n - Core 1: [\u00b0C]\n - ...\n- Power Consumption: [W]\n- Power Limits: PL1=[W], PL2=[W]\n- Throttling Status: [None/Active]\n\n**Memory Controller:**\n- Controller: [Integrated]\n- Memory Channels: [count]\n- Maximum Memory: [GB]\n- Supported Types: [DDR4, DDR5]\n- Maximum Speed: [MT/s]\n- ECC Support: [Yes/No]\n\n**Current Performance:**\n- CPU Utilization: [%] average\n- Per-Core Usage:\n - CPU 0: [%]\n - CPU 1: [%]\n - ...\n- Load Average: [1min], [5min], [15min]\n- Context Switches: [/sec]\n- Interrupts: [/sec]\n- BogoMIPS: [value]\n\n**NUMA Topology (if applicable):**\n- NUMA Nodes: [count]\n- Node 0 CPUs: [list]\n- Node 1 CPUs: [list]\n- Node 0 Memory: [GB]\n- Node 1 Memory: [GB]\n\n**Microcode:**\n- Version: [hex version]\n- Date: [date if available]\n- Update Status: [check if current]\n\n### Performance Assessment\n\n**Performance Tier:**\n- Consumer: Entry/Mainstream/High-end/Enthusiast\n- Server: Entry/Mid-range/High-end\n- Generation: [relative age]\n\n**Bottleneck Analysis:**\n- Core count: [adequate/limited for workload]\n- Clock speed: [competitive/dated]\n- Cache size: [generous/adequate/limited]\n- Memory channels: [optimal/bottleneck]\n\n**Optimization Recommendations:**\n- Frequency scaling: [suggestions]\n- Power management: [tuning options]\n- NUMA configuration: [if applicable]\n- Security mitigation tuning: [performance vs. 
security]\n\n### AI-Readable JSON\n\n```json\n{\n \"hardware\": {\n \"vendor\": \"intel|amd|arm\",\n \"model_name\": \"\",\n \"microarchitecture\": \"\",\n \"family\": \"\",\n \"model\": \"\",\n \"stepping\": 0,\n \"process_nm\": 0\n },\n \"cores\": {\n \"physical_cores\": 0,\n \"logical_processors\": 0,\n \"threads_per_core\": 0,\n \"sockets\": 0,\n \"numa_nodes\": 0\n },\n \"frequency\": {\n \"base_ghz\": 0.0,\n \"max_turbo_ghz\": 0.0,\n \"all_core_turbo_ghz\": 0.0,\n \"scaling_governor\": \"\"\n },\n \"cache\": {\n \"l1d_kb_per_core\": 0,\n \"l1i_kb_per_core\": 0,\n \"l2_kb_per_core\": 0,\n \"l3_mb_total\": 0,\n \"total_cache_mb\": 0\n },\n \"features\": {\n \"instruction_sets\": [],\n \"virtualization\": {\n \"vmx_svm\": false,\n \"iommu\": false\n },\n \"security\": {\n \"aes_ni\": false,\n \"sha_extensions\": false,\n \"sgx\": false\n }\n },\n \"thermal_power\": {\n \"tdp_watts\": 0,\n \"max_temp_celsius\": 0,\n \"current_temp_celsius\": 0,\n \"current_power_watts\": 0\n },\n \"memory_controller\": {\n \"channels\": 0,\n \"max_memory_gb\": 0,\n \"supported_types\": [],\n \"ecc_support\": false\n },\n \"vulnerabilities\": {\n \"spectre_v1\": \"\",\n \"spectre_v2\": \"\",\n \"meltdown\": \"\"\n },\n \"microcode\": {\n \"version\": \"\",\n \"date\": \"\"\n }\n}\n```\n\n## Execution Guidelines\n\n1. **Gather comprehensive data**: Use multiple commands to cross-verify\n2. **Parse carefully**: Extract specific values from verbose output\n3. **Check all cores**: Get per-core data where applicable\n4. **Monitor dynamic state**: Capture current frequencies and temps\n5. **Assess features**: Identify valuable CPU capabilities\n6. **Security review**: Check vulnerabilities and mitigations\n7. **Performance context**: Relate specs to real-world capability\n8. **NUMA awareness**: Handle multi-socket systems properly\n9. **Format clearly**: Present technical data accessibly\n10. **Provide insights**: Don't just list specs, interpret them\n\n## Important Notes\n\n- Some commands require root privileges (dmidecode, turbostat)\n- Install lm-sensors and run sensors-detect for thermal monitoring\n- sysstat package needed for mpstat\n- cpuid and x86info provide additional details if installed\n- Virtualization features require BIOS enablement\n- Security mitigations can impact performance significantly\n- Microcode updates are critical for security\n- NUMA topology only relevant for multi-socket systems\n- Thermal data accuracy varies by motherboard\n- Governor settings affect performance and power consumption\n\nBe extremely thorough - capture every detail about the CPU subsystem.\n", "filepath": "hardware/hardware-profilers/by-component/profile-cpu.md" }, { "name": "profile-gpu", "category": "by-component", "description": "", "tags": [], "content": "You are performing an exhaustive GPU (graphics) profile of the system.\n\n## Your Task\n\nGenerate a comprehensive GPU analysis covering all aspects of graphics hardware, configuration, and capabilities.\n\n### 1. GPU Hardware Identification\n- **Vendor**: NVIDIA, AMD, Intel, or other\n- **GPU model**: Full product name\n- **GPU architecture**: Ada Lovelace, RDNA 3, Xe, etc.\n- **Device ID**: PCI device identifier\n- **Subsystem vendor/device**: Card manufacturer\n- **Revision**: GPU revision/stepping\n- **Manufacturing process**: Node size (5nm, 7nm, etc.)\n\n### 2. 
GPU Specifications\n- **CUDA cores / Stream processors / Execution units**: Compute unit count\n- **Tensor cores / RT cores**: AI and ray tracing hardware\n- **Base clock / Boost clock**: GPU frequencies\n- **Memory size**: VRAM capacity\n- **Memory type**: GDDR6, GDDR6X, HBM2, etc.\n- **Memory bus width**: 128-bit, 256-bit, etc.\n- **Memory bandwidth**: GB/s\n- **TDP**: Thermal design power\n- **Power connectors**: PCIe power requirements\n\n### 3. PCI Configuration\n- **PCI address**: Bus:Device.Function\n- **PCI generation**: PCIe 3.0, 4.0, 5.0\n- **Link width**: x16, x8, x4, etc.\n- **Current link speed**: GT/s\n- **Maximum link speed**: Supported maximum\n- **Link status**: Active, degraded, or optimal\n- **NUMA node**: If in NUMA system\n\n### 4. Display Configuration\n- **Connected displays**: Count and identifiers\n- **Display resolutions**: Per-display native resolution\n- **Refresh rates**: Current refresh rates\n- **Display interfaces**: HDMI, DisplayPort, DVI, VGA\n- **Primary display**: Which output is primary\n- **Display technologies**: G-Sync, FreeSync support\n- **Maximum resolution**: GPU capability\n\n### 5. Driver Information\n- **Driver type**: Proprietary or open-source\n- **Driver version**: Current installed version\n- **Driver date**: Release date\n- **Kernel module**: Module name and version\n- **Mesa version**: For open-source drivers\n- **X.Org driver**: X driver in use\n- **Wayland support**: Compositor compatibility\n- **Vulkan driver**: Vulkan ICD in use\n\n### 6. Graphics API Support\n- **OpenGL version**: Maximum supported version\n- **OpenGL renderer**: Renderer string\n- **Vulkan version**: Vulkan API version\n- **Vulkan extensions**: Count and key extensions\n- **OpenCL version**: Compute API version\n- **Direct3D support**: Wine/Proton capabilities\n- **Video decode**: Hardware decode support (NVDEC, VCE, etc.)\n- **Video encode**: Hardware encode support (NVENC, VCN, etc.)\n\n### 7. GPU Clocks and Power State\n- **Current GPU clock**: Real-time frequency\n- **Current memory clock**: VRAM frequency\n- **Current power draw**: Watts\n- **Power state**: P-state (P0-P12)\n- **Performance level**: Performance mode\n- **Fan speed**: Current fan RPM/%\n- **GPU temperature**: Current temp in \u00b0C\n- **Throttling status**: Thermal or power throttling\n\n### 8. GPU Memory Details\n- **Total VRAM**: Total video memory\n- **Used VRAM**: Currently allocated\n- **Free VRAM**: Available memory\n- **Bar size**: PCIe BAR size (Resizable BAR)\n- **Memory controller**: Type and capabilities\n- **ECC support**: Error correction capability\n\n### 9. Compute Capabilities\n- **CUDA version**: For NVIDIA (if applicable)\n- **Compute capability**: CUDA compute version\n- **ROCm support**: For AMD\n- **OpenCL devices**: Available compute devices\n- **Tensor core support**: AI acceleration\n- **Ray tracing support**: RT core capability\n- **Matrix operations**: INT8, FP16, TF32, etc.\n\n### 10. 
Multi-GPU Configuration\n- **Number of GPUs**: Total graphics cards\n- **SLI/CrossFire**: Multi-GPU mode status\n- **GPU topology**: How GPUs are connected\n- **Per-GPU details**: Individual stats for each GPU\n\n## Commands to Use\n\n**Basic GPU detection:**\n- `lspci | grep -i vga`\n- `lspci | grep -i 3d`\n- `sudo lshw -C display`\n- `lspci -v -s $(lspci | grep VGA | cut -d' ' -f1)`\n\n**Detailed PCI information:**\n- `sudo lspci -vv | grep -A 20 VGA`\n- `sudo lspci -nnk | grep -A 3 VGA`\n\n**NVIDIA-specific:**\n- `nvidia-smi`\n- `nvidia-smi -q` - Detailed query\n- `nvidia-smi -q -d CLOCK` - Clock details\n- `nvidia-smi -q -d MEMORY` - Memory details\n- `nvidia-smi -q -d TEMPERATURE` - Thermal info\n- `nvidia-smi -q -d POWER` - Power details\n- `nvidia-smi -q -d PIDS` - Process info\n- `nvidia-smi topo -m` - Topology matrix\n- `nvidia-settings -q all` - All settings\n\n**AMD-specific:**\n- `rocm-smi`\n- `radeontop` (if installed)\n- `sudo cat /sys/kernel/debug/dri/0/amdgpu_pm_info`\n- `sudo cat /sys/class/drm/card*/device/pp_dpm_sclk`\n- `clinfo` - OpenCL info\n\n**Intel-specific:**\n- `intel_gpu_top` (if installed)\n- `intel_gpu_frequency` - GPU frequency info\n- `vainfo` - VA-API information\n\n**Graphics API information:**\n- `glxinfo | grep -i \"opengl version\"`\n- `glxinfo | grep -i \"opengl renderer\"`\n- `vulkaninfo --summary`\n- `vulkaninfo` - Full Vulkan details\n- `clinfo` - OpenCL capabilities\n- `vdpauinfo` - VDPAU support\n- `vainfo` - VA-API support\n\n**Driver information:**\n- `modinfo nvidia` (for NVIDIA)\n- `modinfo amdgpu` (for AMD)\n- `modinfo i915` (for Intel)\n- `glxinfo | grep -i \"opengl core profile version\"`\n- `dpkg -l | grep nvidia` (driver packages)\n\n**Display information:**\n- `xrandr --verbose`\n- `xrandr --listmonitors`\n- `kscreen-doctor -o` (for KDE)\n- `wayland-info` (if on Wayland)\n\n**System files:**\n- `cat /proc/driver/nvidia/version`\n- `cat /sys/class/drm/card*/device/uevent`\n- `cat /sys/kernel/debug/dri/0/name`\n\n## Output Format\n\n### Executive Summary\n```\nGPU: [manufacturer] [model]\nArchitecture: [architecture name]\nVRAM: [X] GB [memory type]\nDriver: [type] v[version]\nCompute: CUDA [version] / ROCm [version] / OpenCL [version]\nAPI Support: OpenGL [v], Vulkan [v]\n```\n\n### Detailed GPU Profile\n\n**Hardware Identification:**\n- Vendor: [NVIDIA/AMD/Intel]\n- Model: [full model name]\n- Architecture: [codename/architecture]\n- Device ID: [PCI ID]\n- Subsystem: [manufacturer]\n- Manufacturing: [nm process]\n\n**GPU Specifications:**\n- Compute Units: [count] [CUDA cores/SPs/EUs]\n- Tensor Cores: [count] (if applicable)\n- RT Cores: [count] (if applicable)\n- Base Clock: [MHz]\n- Boost Clock: [MHz]\n- Memory: [GB] [type]\n- Memory Bus: [bit]-bit\n- Bandwidth: [GB/s]\n- TDP: [W]\n\n**PCI Configuration:**\n- PCI Address: [bus:dev.func]\n- PCIe Generation: [3.0/4.0/5.0]\n- Link Width: x[16/8/4]\n- Current Speed: [GT/s]\n- Max Speed: [GT/s]\n- Link Status: [Optimal/Degraded]\n\n**Display Configuration:**\n- Connected Displays: [count]\n - Display 1: [resolution]@[Hz] via [interface]\n - Display 2: ...\n- Primary Display: [identifier]\n- Adaptive Sync: [G-Sync/FreeSync/None]\n\n**Driver Information:**\n- Driver Type: [Proprietary/Open Source]\n- Driver Version: [version]\n- Release Date: [date]\n- Kernel Module: [module name]\n- Mesa Version: [version] (if applicable)\n- X.Org Driver: [driver name]\n- Wayland Support: [Yes/No]\n\n**Graphics API Support:**\n- OpenGL: [version]\n- OpenGL Renderer: [string]\n- Vulkan: [version]\n- Vulkan 
Extensions: [count]\n- OpenCL: [version]\n- Hardware Video Decode: [NVDEC/VCE/VA-API]\n- Hardware Video Encode: [NVENC/VCN/QSV]\n\n**Current GPU State:**\n- GPU Clock: [MHz]\n- Memory Clock: [MHz]\n- Power Draw: [W] / [TDP W]\n- Power State: [P-state]\n- Temperature: [\u00b0C]\n- Fan Speed: [RPM / %]\n- Throttling: [None/Thermal/Power]\n\n**Memory Status:**\n- Total VRAM: [GB]\n- Used VRAM: [GB] ([%])\n- Free VRAM: [GB]\n- BAR Size: [MB] (Resizable BAR: [Enabled/Disabled])\n\n**Compute Capabilities:**\n- CUDA Version: [version] (Compute [X.X])\n- Tensor Core Support: [Yes/No]\n- RT Core Support: [Yes/No]\n- Precision Support: FP64, FP32, FP16, INT8, [TF32]\n- ROCm Version: [version] (for AMD)\n- OpenCL Devices: [count]\n\n**Performance and Optimization:**\n- PCIe Link Utilization: [assessment]\n- Resizable BAR: [status and impact]\n- Driver Optimization: [recommendations]\n- Compute Configuration: [assessment]\n\n### Multi-GPU Configuration (if applicable)\n```\nGPU 0: [model] - [details]\nGPU 1: [model] - [details]\nTopology: [description]\nSLI/CrossFire: [status]\n```\n\n### AI-Readable JSON\n\n```json\n{\n \"hardware\": {\n \"vendor\": \"nvidia|amd|intel\",\n \"model\": \"\",\n \"architecture\": \"\",\n \"device_id\": \"\",\n \"manufacturing_process_nm\": 0\n },\n \"specifications\": {\n \"compute_units\": 0,\n \"tensor_cores\": 0,\n \"rt_cores\": 0,\n \"base_clock_mhz\": 0,\n \"boost_clock_mhz\": 0,\n \"vram_gb\": 0,\n \"memory_type\": \"\",\n \"memory_bus_bits\": 0,\n \"bandwidth_gbs\": 0,\n \"tdp_watts\": 0\n },\n \"pci\": {\n \"address\": \"\",\n \"generation\": \"3.0|4.0|5.0\",\n \"link_width\": 0,\n \"current_speed_gts\": 0,\n \"max_speed_gts\": 0,\n \"resizable_bar\": false\n },\n \"driver\": {\n \"type\": \"proprietary|open_source\",\n \"version\": \"\",\n \"kernel_module\": \"\",\n \"mesa_version\": \"\"\n },\n \"api_support\": {\n \"opengl_version\": \"\",\n \"vulkan_version\": \"\",\n \"opencl_version\": \"\",\n \"cuda_version\": \"\",\n \"compute_capability\": \"\"\n },\n \"current_state\": {\n \"gpu_clock_mhz\": 0,\n \"memory_clock_mhz\": 0,\n \"power_draw_watts\": 0,\n \"temperature_celsius\": 0,\n \"fan_speed_percent\": 0,\n \"vram_used_gb\": 0,\n \"vram_total_gb\": 0\n },\n \"displays\": [\n {\n \"resolution\": \"\",\n \"refresh_rate_hz\": 0,\n \"interface\": \"\"\n }\n ],\n \"compute\": {\n \"tensor_core_supported\": false,\n \"rt_core_supported\": false,\n \"precisions\": []\n }\n}\n```\n\n## Execution Guidelines\n\n1. **Detect GPU vendor first**: Tailor commands to detected hardware\n2. **Use vendor-specific tools**: nvidia-smi, rocm-smi, intel_gpu_top\n3. **Gather PCI details**: Critical for PCIe performance assessment\n4. **Check driver status**: Ensure drivers are properly loaded\n5. **Query all APIs**: OpenGL, Vulkan, OpenCL for full picture\n6. **Monitor dynamic state**: Clocks, temps, power in real-time\n7. **Assess configuration**: Identify bottlenecks or misconfigurations\n8. **Check for updates**: Compare installed vs. latest drivers\n9. **Multi-GPU awareness**: Handle systems with multiple GPUs\n10. 
**Format comprehensively**: Include all gathered data\n\n## Important Notes\n\n- Some commands require specific driver packages installed\n- NVIDIA requires proprietary drivers for full functionality\n- AMD open-source drivers have varying feature support\n- Intel drivers are generally built into kernel\n- Vulkan requires vulkan-tools package\n- OpenCL requires vendor-specific implementations\n- Some features require newer kernel versions\n- Virtual machines may have limited GPU information\n- Secure boot may affect driver installation\n- Wayland vs. X11 may affect available information\n\nBe extremely thorough - capture every detail about the graphics subsystem.\n", "filepath": "hardware/hardware-profilers/by-component/profile-gpu.md" }, { "name": "profile-motherboard", "category": "by-component", "description": "", "tags": [], "content": "You are performing an exhaustive motherboard (system board) profile of the system.\n\n## Your Task\n\nGenerate a comprehensive motherboard analysis covering all aspects of the system board, chipset, firmware, and connectivity.\n\n### 1. Motherboard Identification\n- **Manufacturer**: Board manufacturer (ASUS, Gigabyte, MSI, etc.)\n- **Product name**: Board model/SKU\n- **Version**: Board revision number\n- **Serial number**: Board serial number\n- **Asset tag**: Asset tag (if configured)\n- **Location in chassis**: Board location descriptor\n- **Board type**: Motherboard, server board, embedded, etc.\n\n### 2. Chipset Information\n- **Chipset manufacturer**: Intel, AMD, etc.\n- **Chipset model**: Specific chipset name\n- **Chipset features**: Key capabilities\n- **PCH/FCH revision**: Platform controller hub revision\n- **South bridge**: Legacy south bridge info\n- **North bridge**: Legacy north bridge info (if separate)\n\n### 3. BIOS/UEFI Firmware\n- **Firmware type**: BIOS or UEFI\n- **Vendor**: BIOS manufacturer (AMI, Award, Phoenix, etc.)\n- **Version**: Current BIOS version\n- **Release date**: BIOS release date\n- **Revision**: Firmware major/minor revision\n- **ROM size**: BIOS ROM capacity\n- **UEFI mode**: Legacy or UEFI boot mode\n- **Secure boot**: Status and configuration\n\n### 4. Expansion Slots\n- **PCIe slots**: Count and generations (PCIe 3.0/4.0/5.0)\n- **Slot types**: x16, x8, x4, x1 configurations\n- **Slot usage**: Which slots are occupied\n- **M.2 slots**: Count, key types, and generations\n- **Legacy slots**: PCI, AGP (if any)\n- **Slot sharing**: Lane sharing configurations\n\n### 5. Storage Controllers and Interfaces\n- **SATA ports**: Count and generation (SATA II/III)\n- **SATA controllers**: Onboard controller details\n- **NVMe support**: M.2 NVMe slot count and PCIe lanes\n- **RAID support**: Hardware RAID capabilities\n- **Storage modes**: AHCI, RAID, IDE\n- **eSATA**: External SATA ports\n- **U.2/U.3**: Enterprise NVMe support\n\n### 6. I/O Connectivity\n- **USB controllers**: USB controller chipsets\n- **USB ports**: Count by version (2.0, 3.0, 3.1, 3.2, 4.0)\n- **USB-C ports**: Count and capabilities\n- **Thunderbolt**: Version and port count\n- **Internal USB headers**: Front panel USB headers\n- **PS/2 ports**: Legacy keyboard/mouse ports\n- **Serial ports**: RS-232 COM ports\n- **Parallel port**: LPT port (rare)\n\n### 7. Network Interfaces\n- **Ethernet controllers**: Onboard NIC chipsets\n- **Ethernet ports**: Count and speeds (1G, 2.5G, 10G)\n- **WiFi**: Onboard WiFi chipset and standard\n- **Bluetooth**: Bluetooth version\n- **MAC addresses**: Physical addresses of NICs\n\n### 8. 
Audio Subsystem\n- **Audio codec**: Onboard audio chipset\n- **Audio channels**: 2.0, 5.1, 7.1 support\n- **Audio ports**: Line-in, line-out, mic, optical\n- **Audio features**: Special audio technologies\n- **HDMI/DP audio**: Audio over display connections\n\n### 9. Display and Graphics\n- **Integrated graphics**: iGPU support (if applicable)\n- **Display outputs**: HDMI, DisplayPort, DVI, VGA\n- **Multi-monitor**: Maximum displays supported\n- **Display port versions**: HDMI 2.1, DP 1.4, etc.\n\n### 10. Power Delivery\n- **Power phases**: VRM phase count\n- **Power connectors**: ATX 24-pin, EPS 8-pin, etc.\n- **CPU power**: 4-pin, 8-pin, 8+4 pin configuration\n- **PCIe power**: Additional PCIe power headers\n- **Fan headers**: Count and type (PWM/DC)\n- **RGB headers**: Addressable RGB headers\n- **Power monitoring**: Voltage monitoring points\n\n### 11. Memory Support\n- **Memory slots**: Total DIMM slots\n- **Maximum capacity**: Maximum RAM supported\n- **Memory types**: DDR4, DDR5, ECC support\n- **Memory speeds**: Supported frequencies\n- **Memory channels**: Dual, quad channel\n- **XMP/DOCP**: Overclocking profile support\n\n### 12. Form Factor and Physical\n- **Form factor**: ATX, Micro-ATX, Mini-ITX, EATX, etc.\n- **Dimensions**: Board dimensions\n- **Mounting holes**: Standoff pattern\n- **Contained devices**: Onboard devices count\n\n### 13. Special Features\n- **Overclocking**: OC features and BIOS options\n- **TPM**: Trusted Platform Module version\n- **BIOS flashback**: No-CPU BIOS update\n- **Q-Flash/M-Flash**: Motherboard-specific tools\n- **POST code display**: Onboard debug LEDs/display\n- **Dual BIOS**: Backup BIOS chip\n- **Clear CMOS**: CMOS reset button/jumper\n- **BIOS recovery**: Recovery mechanisms\n\n### 14. Temperature and Monitoring\n- **Temperature sensors**: Onboard sensor locations\n- **Fan control**: Hardware fan control capabilities\n- **Voltage monitoring**: Monitored voltage rails\n- **Hardware monitoring chip**: Super I/O or monitoring IC\n\n### 15. 
System Slots and Headers\n- **Front panel headers**: Power, reset, LED headers\n- **Internal headers**: All internal connectors\n- **System fan headers**: Chassis fan connectors\n- **Pump headers**: Water cooling pump headers\n- **Addressable RGB**: ARGB/DRGB header count\n- **Temperature probe headers**: External sensor inputs\n\n## Commands to Use\n\n**Motherboard identification:**\n- `sudo dmidecode -t baseboard`\n- `sudo dmidecode -t 2`\n- `cat /sys/class/dmi/id/board_vendor`\n- `cat /sys/class/dmi/id/board_name`\n- `cat /sys/class/dmi/id/board_version`\n\n**BIOS/UEFI information:**\n- `sudo dmidecode -t bios`\n- `sudo dmidecode -t 0`\n- `cat /sys/class/dmi/id/bios_vendor`\n- `cat /sys/class/dmi/id/bios_version`\n- `cat /sys/class/dmi/id/bios_date`\n- `efibootmgr -v` (if UEFI)\n- `[ -d /sys/firmware/efi ] && echo \"UEFI\" || echo \"BIOS\"`\n\n**Chipset information:**\n- `lspci | grep -i \"ISA bridge\"`\n- `lspci -v | grep -A 10 \"ISA bridge\"`\n- `sudo dmidecode -t 9` - System slots\n\n**PCI/PCIe slots and devices:**\n- `lspci -tv` - Tree view of PCI devices\n- `lspci -vv` - Verbose PCI information\n- `sudo dmidecode -t 9` - System slot information\n- `sudo lspci -vvv -s ` - Specific slot details\n\n**Storage controllers:**\n- `lspci | grep -i \"sata\\|raid\\|storage\"`\n- `lspci -v | grep -A 10 -i \"sata\\|ahci\"`\n- `ls /sys/class/ata_port/` - SATA ports\n- `ls /sys/block/nvme*` - NVMe devices\n\n**USB controllers:**\n- `lspci | grep -i usb`\n- `lsusb -v`\n- `lsusb -t` - USB device tree\n- `cat /sys/kernel/debug/usb/devices`\n\n**Network controllers:**\n- `lspci | grep -i \"ethernet\\|network\"`\n- `sudo lshw -class network`\n- `ip link show`\n\n**Audio controller:**\n- `lspci | grep -i audio`\n- `aplay -l`\n- `cat /proc/asound/cards`\n\n**System information:**\n- `sudo dmidecode -t system`\n- `sudo dmidecode -t chassis`\n- `sudo lshw -short`\n- `sudo lshw -businfo`\n\n**Hardware monitoring:**\n- `sensors` (if lm-sensors configured)\n- `cat /sys/class/hwmon/hwmon*/name`\n- `sudo i2cdetect -l` (I2C buses)\n\n**Firmware and boot:**\n- `sudo dmidecode -t 13` - BIOS language\n- `bootctl status` (systemd-boot)\n- `efibootmgr -v` (UEFI variables)\n\n**Memory slots:**\n- `sudo dmidecode -t 16` - Physical memory array\n- `sudo dmidecode -t 17` - Memory devices\n\n**Expansion and slots:**\n- `sudo dmidecode -t 9` - System slots\n- `sudo biosdecode` - Additional BIOS info\n\n**TPM and security:**\n- `cat /sys/class/tpm/tpm0/device/description`\n- `tpm2_getcap properties-fixed` (if TPM 2.0 tools)\n\n## Output Format\n\n### Executive Summary\n```\nMotherboard: [manufacturer] [model] (rev [version])\nChipset: [chipset model]\nForm Factor: [ATX/mATX/ITX]\nBIOS: [vendor] v[version] ([date])\nFeatures: [key features]\n```\n\n### Detailed Motherboard Profile\n\n**Board Identification:**\n- Manufacturer: [vendor]\n- Product Name: [model]\n- Version: [revision]\n- Serial Number: [S/N]\n- Asset Tag: [tag]\n- Type: [motherboard type]\n\n**Chipset:**\n- Manufacturer: [Intel/AMD]\n- Model: [chipset name]\n- Revision: [revision]\n- Features: [key capabilities]\n\n**BIOS/UEFI:**\n- Type: [BIOS/UEFI]\n- Vendor: [manufacturer]\n- Version: [version]\n- Release Date: [date]\n- Revision: [major.minor]\n- ROM Size: [KB/MB]\n- Boot Mode: [Legacy/UEFI]\n- Secure Boot: [Enabled/Disabled]\n\n**Expansion Slots:**\n- PCIe x16 Slots: [count] (Gen [3.0/4.0/5.0])\n - Slot 1: PCIe [gen] x16 - [occupied by: device]\n - Slot 2: PCIe [gen] x16 (runs at x8) - [status]\n- PCIe x1 Slots: [count]\n- M.2 Slots: [count]\n - M.2_1: 
Key M, PCIe [gen] x4 - [device]\n - M.2_2: Key M, PCIe [gen] x4 - [empty]\n\n**Storage Interfaces:**\n- SATA Ports: [count] x SATA [II/III]\n- SATA Controller: [chipset model]\n- NVMe Support: [count] x M.2 slots (PCIe [gen] x4)\n- RAID Support: [0, 1, 5, 10]\n- Storage Mode: [AHCI/RAID/IDE]\n\n**I/O Connectivity:**\n- USB Controllers: [chipset models]\n- USB Ports:\n - USB 2.0: [count] ports\n - USB 3.0/3.1 Gen 1: [count] ports\n - USB 3.1 Gen 2: [count] ports\n - USB 3.2 Gen 2x2: [count] ports\n - USB4/Thunderbolt: [count] ports\n- USB-C: [count] ports ([capabilities])\n- Internal USB Headers: [count]\n- Legacy Ports: [PS/2, Serial, Parallel]\n\n**Network:**\n- Ethernet Controllers: [chipset models]\n- Ethernet Ports: [count] x [1G/2.5G/10G]\n- WiFi: [chipset] ([802.11 standard])\n- Bluetooth: [version]\n\n**Audio:**\n- Audio Codec: [chipset model]\n- Channels: [2.0/5.1/7.1]\n- Audio Ports: [count and types]\n- Features: [special audio tech]\n\n**Display Outputs (if integrated graphics):**\n- HDMI: [count] x HDMI [version]\n- DisplayPort: [count] x DP [version]\n- DVI: [count] ports\n- VGA: [count] ports\n\n**Power Delivery:**\n- VRM Phases: [count]-phase ([digital/analog])\n- ATX Power: 24-pin\n- CPU Power: [4/8/8+4]-pin\n- PCIe Power: [auxiliary power headers]\n- Fan Headers: [count] ([PWM/DC])\n- RGB Headers: [count] ([ARGB/RGB])\n\n**Memory Support:**\n- DIMM Slots: [count]\n- Maximum Capacity: [GB]\n- Memory Type: [DDR4/DDR5]\n- Supported Speeds: Up to [MT/s]\n- Channel Mode: [Dual/Quad] Channel\n- ECC Support: [Yes/No]\n- XMP/DOCP: [version]\n\n**Form Factor:**\n- Standard: [ATX/mATX/Mini-ITX/EATX]\n- Dimensions: [mm x mm]\n- Mounting: [ATX standard]\n\n**Special Features:**\n- TPM: [version] ([enabled/disabled])\n- BIOS Flashback: [Yes/No]\n- Dual BIOS: [Yes/No]\n- POST Code Display: [Yes/No]\n- Clear CMOS: [Button/Jumper]\n- Overclocking: [features list]\n\n**Temperature Monitoring:**\n- Sensors: [locations]\n- Fan Control: [PWM headers count]\n- Voltage Monitoring: [rails monitored]\n- Monitoring Chip: [IC model]\n\n### Connectivity Matrix\n\n```\nPCIe Slot Layout:\nSlot 1: PCIe 4.0 x16 (CPU) \u2192 [GPU installed]\nSlot 2: PCIe 4.0 x16 (runs at x4, chipset) \u2192 [empty]\nSlot 3: PCIe 3.0 x1 (chipset) \u2192 [WiFi card]\nM.2_1: PCIe 4.0 x4 (CPU) \u2192 [NVMe SSD]\nM.2_2: PCIe 3.0 x4 (chipset) \u2192 [empty]\n\nStorage Ports:\nSATA0-SATA3: [devices]\nSATA4-SATA7: [empty]\n```\n\n### Upgrade and Expansion Potential\n\n- Available PCIe slots: [count and type]\n- Available M.2 slots: [count]\n- RAM expansion: [X GB current / Y GB max]\n- Storage expansion: [available ports]\n- BIOS updates: [status]\n\n### AI-Readable JSON\n\n```json\n{\n \"board\": {\n \"manufacturer\": \"\",\n \"product_name\": \"\",\n \"version\": \"\",\n \"serial_number\": \"\",\n \"form_factor\": \"ATX|mATX|ITX|EATX\"\n },\n \"chipset\": {\n \"manufacturer\": \"intel|amd\",\n \"model\": \"\",\n \"revision\": \"\"\n },\n \"bios\": {\n \"type\": \"BIOS|UEFI\",\n \"vendor\": \"\",\n \"version\": \"\",\n \"release_date\": \"\",\n \"secure_boot\": false\n },\n \"expansion_slots\": {\n \"pcie_x16\": [\n {\n \"slot_number\": 1,\n \"generation\": \"3.0|4.0|5.0\",\n \"lanes\": 16,\n \"occupied\": true,\n \"device\": \"\"\n }\n ],\n \"pcie_x1\": 0,\n \"m2_slots\": 0\n },\n \"storage\": {\n \"sata_ports\": 0,\n \"sata_generation\": \"II|III\",\n \"nvme_slots\": 0,\n \"raid_support\": []\n },\n \"io\": {\n \"usb\": {\n \"usb_2_0\": 0,\n \"usb_3_0\": 0,\n \"usb_3_1\": 0,\n \"usb_3_2\": 0,\n \"usb_c\": 0\n },\n 
\"ethernet_ports\": 0,\n \"wifi\": false,\n \"bluetooth\": false\n },\n \"audio\": {\n \"codec\": \"\",\n \"channels\": \"\"\n },\n \"power\": {\n \"vrm_phases\": 0,\n \"fan_headers\": 0,\n \"rgb_headers\": 0\n },\n \"memory\": {\n \"dimm_slots\": 0,\n \"max_capacity_gb\": 0,\n \"type\": \"DDR4|DDR5\",\n \"max_speed_mts\": 0,\n \"ecc_support\": false\n },\n \"features\": {\n \"tpm\": \"\",\n \"bios_flashback\": false,\n \"dual_bios\": false,\n \"post_code_display\": false\n }\n}\n```\n\n## Execution Guidelines\n\n1. **Use dmidecode extensively**: Primary source for board info\n2. **Cross-reference with lspci**: Verify chipset and slots\n3. **Check physical vs. logical**: Some slots share lanes\n4. **Document slot usage**: What's installed where\n5. **Identify chipset features**: What the board can do\n6. **BIOS version importance**: Check for updates\n7. **Expansion planning**: Available upgrade paths\n8. **Power delivery assessment**: Adequacy for components\n9. **I/O inventory**: Complete port count\n10. **Format comprehensively**: Present all findings clearly\n\n## Important Notes\n\n- Requires root/sudo for most detailed information\n- dmidecode is the primary tool for board identification\n- Some data may not be available in virtual machines\n- BIOS version is critical for compatibility and security\n- PCIe lane sharing is common on consumer boards\n- M.2 slots may disable SATA ports when used\n- Form factor determines case compatibility\n- VRM quality affects overclocking and stability\n- TPM may require BIOS enablement\n- UEFI vs. Legacy affects boot configuration\n- Some features require specific BIOS settings\n- Motherboard manual provides definitive specifications\n\nBe extremely thorough - document every aspect of the motherboard.\n", "filepath": "hardware/hardware-profilers/by-component/profile-motherboard.md" }, { "name": "profile-ram", "category": "by-component", "description": "", "tags": [], "content": "You are performing an exhaustive RAM (memory) profile of the system.\n\n## Your Task\n\nGenerate a comprehensive memory analysis covering all aspects of RAM configuration, performance, and utilization.\n\n### 1. Memory Module Inventory\n- **Number of modules**: Total DIMMs installed\n- **Slots used/available**: Occupied vs. total slots\n- **Module locations**: Which slots contain modules\n- **Form factor**: DIMM, SO-DIMM, etc.\n- **Module manufacturers**: Per-module vendor\n- **Part numbers**: Specific module part numbers\n- **Serial numbers**: Per-module serial numbers\n\n### 2. Memory Specifications\n- **Total capacity**: System total in GB\n- **Per-module capacity**: Size of each DIMM\n- **Memory type**: DDR3, DDR4, DDR5, LPDDR, etc.\n- **Speed ratings**: Configured speed and maximum speed\n- **Clock frequency**: MT/s or MHz\n- **Voltage**: Operating voltage (1.2V, 1.35V, 1.5V, etc.)\n- **Data width**: 64-bit, 72-bit (ECC)\n- **Total width**: Physical bus width\n\n### 3. Memory Timings and Performance\n- **CAS latency**: Primary timing (CL)\n- **RAS to CAS delay**: tRCD\n- **Row precharge time**: tRP\n- **Row active time**: tRAS\n- **Command rate**: 1T or 2T\n- **XMP/DOCP profiles**: Available overclocking profiles\n- **Current vs. rated speed**: Compare actual to maximum\n- **Memory bandwidth**: Theoretical and actual\n\n### 4. Memory Technology Features\n- **ECC support**: Error-correcting code capability\n- **Channel configuration**: Single, dual, triple, quad channel\n- **Rank configuration**: Single rank, dual rank per module\n- **Memory controller**: Integrated vs. 
discrete\n- **NUMA configuration**: Non-uniform memory access (multi-CPU systems)\n- **Interleaving**: Memory interleaving status\n\n### 5. Current Memory Usage\n- **Total memory**: Available to system\n- **Used memory**: Currently allocated\n- **Free memory**: Completely unused\n- **Available memory**: Free + reclaimable\n- **Buffers**: Kernel buffer cache\n- **Cached**: Page cache\n- **Active/Inactive**: Hot and cold memory\n- **Dirty memory**: Modified pages not yet written\n- **Writeback**: Currently being written back\n\n### 6. Swap Configuration\n- **Swap total**: Total swap space\n- **Swap used**: Currently used swap\n- **Swap type**: Partition, file, or zram\n- **Swappiness**: Kernel swap tendency (0-100)\n- **Swap devices**: List of swap locations\n- **Swap priority**: If multiple swap devices\n\n### 7. Memory Pressure and Performance\n- **Page faults**: Major and minor fault rates\n- **Swap in/out rates**: If swap is active\n- **Memory pressure**: OOM events, thrashing indicators\n- **Huge pages**: Transparent huge pages configuration\n- **NUMA statistics**: Memory locality (if applicable)\n- **Memory errors**: ECC errors if supported\n\n### 8. Virtual Memory Configuration\n- **Virtual memory parameters**: vm.swappiness, vm.vfs_cache_pressure\n- **Overcommit settings**: Memory overcommit mode\n- **OOM killer settings**: Out-of-memory behavior\n- **Huge page configuration**: Transparent huge pages, huge page pool\n\n## Commands to Use\n\n**DMI/Hardware information:**\n- `sudo dmidecode -t memory`\n- `sudo dmidecode -t 16` - Physical memory array\n- `sudo dmidecode -t 17` - Memory device details\n\n**Memory status:**\n- `free -h`\n- `cat /proc/meminfo`\n- `vmstat -s`\n- `vmstat 1 5` - Memory statistics over time\n\n**Module details:**\n- `sudo lshw -class memory`\n- `sudo decode-dimms` - Detailed DIMM info (if i2c-tools installed)\n\n**Performance and timings:**\n- `sudo dmidecode -t memory | grep -i speed`\n- `sudo dmidecode -t memory | grep -i timing`\n- `cat /sys/devices/system/edac/mc/mc*/dimm*/dimm_label` - DIMM labels\n\n**Memory bandwidth:**\n- `sudo dmidecode -t memory | grep -i bandwidth`\n- Use `sysbench memory` for benchmarking (if installed)\n\n**Swap information:**\n- `swapon --show`\n- `cat /proc/swaps`\n- `sysctl vm.swappiness`\n\n**Virtual memory tuning:**\n- `sysctl -a | grep vm.`\n- `cat /proc/sys/vm/overcommit_memory`\n\n**Memory errors (ECC systems):**\n- `sudo edac-util -v` (if available)\n- `sudo ras-mc-ctl --errors`\n\n**NUMA information:**\n- `numactl --hardware` (if NUMA system)\n- `cat /proc/buddyinfo`\n\n## Output Format\n\n### Executive Summary\n```\nMemory Configuration: [total] GB, [type] @ [speed] MT/s\nModules: [X] x [Y]GB ([channel] channel, [rank] rank)\nTechnology: [ECC/Non-ECC], [feature highlights]\nCurrent Usage: [X]% ([used]/[total] GB)\n```\n\n### Detailed Memory Profile\n\n**Module Inventory:**\n```\nSlot 1 (DIMM_A1): [manufacturer] [part-number]\n - Capacity: [GB]\n - Type: [DDR4/DDR5]\n - Speed: [MT/s]\n - Voltage: [V]\n - Serial: [S/N]\n\nSlot 2 (DIMM_A2): ...\n```\n\n**Memory Configuration:**\n- Total Capacity: [X] GB\n- Memory Type: [DDR4/DDR5]\n- Channel Mode: [Dual/Quad] Channel\n- Configured Speed: [MT/s] ([MHz])\n- Maximum Supported Speed: [MT/s]\n- Voltage: [V]\n- ECC: [Enabled/Disabled/Not Supported]\n\n**Memory Timings:**\n- CAS Latency: [CL]\n- tRCD: [ns]\n- tRP: [ns]\n- tRAS: [ns]\n- Command Rate: [1T/2T]\n\n**Current Usage Statistics:**\n```\nTotal: [X] GB\nUsed: [Y] GB ([Z]%)\nFree: [A] GB\nAvailable: [B] GB\nBuffers: [C] 
MB\nCached: [D] GB\nActive: [E] GB\nInactive: [F] GB\n```\n\n**Swap Configuration:**\n- Swap Total: [X] GB ([partition/file/zram])\n- Swap Used: [Y] GB ([Z]%)\n- Swappiness: [value]\n- Devices: [list]\n\n**Performance Metrics:**\n- Page Faults: [rate] per second\n- Swap Activity: [in/out rates]\n- Memory Bandwidth: [theoretical GB/s]\n- Huge Pages: [configured/available]\n\n**Virtual Memory Tuning:**\n- vm.swappiness: [value]\n- vm.vfs_cache_pressure: [value]\n- vm.overcommit_memory: [value]\n- Transparent Huge Pages: [enabled/disabled]\n\n### Memory Assessment\n\n**Configuration Analysis:**\n- Channel utilization: [optimal/suboptimal]\n- Speed optimization: [running at spec/underclocked]\n- Capacity per channel: [balanced/unbalanced]\n- Upgrade path: [recommendations]\n\n**Performance Considerations:**\n- Memory pressure: [low/medium/high]\n- Swap usage: [analysis]\n- Bottleneck assessment: [findings]\n\n### AI-Readable JSON\n\n```json\n{\n \"memory_modules\": [\n {\n \"slot\": \"\",\n \"manufacturer\": \"\",\n \"part_number\": \"\",\n \"serial_number\": \"\",\n \"capacity_gb\": 0,\n \"type\": \"DDR4|DDR5\",\n \"speed_mts\": 0,\n \"voltage\": 0.0,\n \"form_factor\": \"DIMM|SO-DIMM\"\n }\n ],\n \"configuration\": {\n \"total_capacity_gb\": 0,\n \"memory_type\": \"\",\n \"channel_mode\": \"single|dual|quad\",\n \"configured_speed_mts\": 0,\n \"max_speed_mts\": 0,\n \"ecc_enabled\": false,\n \"slots_used\": 0,\n \"slots_total\": 0\n },\n \"timings\": {\n \"cas_latency\": 0,\n \"trcd\": 0,\n \"trp\": 0,\n \"tras\": 0\n },\n \"usage\": {\n \"total_gb\": 0.0,\n \"used_gb\": 0.0,\n \"free_gb\": 0.0,\n \"available_gb\": 0.0,\n \"cached_gb\": 0.0,\n \"usage_percent\": 0.0\n },\n \"swap\": {\n \"total_gb\": 0.0,\n \"used_gb\": 0.0,\n \"type\": \"partition|file|zram\",\n \"swappiness\": 0\n },\n \"features\": {\n \"ecc_supported\": false,\n \"numa\": false,\n \"huge_pages_enabled\": false\n }\n}\n```\n\n## Execution Guidelines\n\n1. **Use sudo liberally**: Most detailed memory info requires root\n2. **Parse dmidecode carefully**: Extract all per-DIMM details\n3. **Cross-reference data**: Verify findings using multiple sources\n4. **Calculate derived values**: Bandwidth, channel utilization, etc.\n5. **Check for errors**: Look for memory error logs\n6. **Assess configuration**: Identify optimization opportunities\n7. **Consider upgrade paths**: Suggest meaningful improvements\n8. **Monitor dynamic metrics**: Capture usage over brief period\n\n## Important Notes\n\n- Some details require specific tools (i2c-tools for SPD data)\n- ECC information only available on systems with ECC support\n- Memory timings may not be fully exposed on all systems\n- Virtual machines may not expose full memory details\n- NUMA information only relevant for multi-CPU systems\n- Benchmark tools (sysbench, memtester) can provide additional insights\n\nBe extremely thorough - capture every detail about the memory subsystem.\n", "filepath": "hardware/hardware-profilers/by-component/profile-ram.md" } ], "clis": [ { "name": "install-brew", "category": "clis", "description": "", "tags": [], "content": "# Install Homebrew on Linux\n\nYou are helping the user install Homebrew (brew) package manager on Linux.\n\n## Your tasks:\n\n1. **Check if Homebrew is already installed:**\n - Check: `which brew`\n - If installed: `brew --version`\n - If already installed, ask if they want to update or reconfigure it\n\n2. 
**Check prerequisites:**\n Homebrew requires:\n - Git: `git --version`\n - Curl: `curl --version`\n - GCC: `gcc --version`\n - Build essentials\n\n Install missing prerequisites:\n ```bash\n sudo apt update\n sudo apt install build-essential procps curl file git\n ```\n\n3. **Download and run Homebrew installer:**\n ```bash\n /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n ```\n\n The script will:\n - Install to `/home/linuxbrew/.linuxbrew` (multi-user) or `~/.linuxbrew` (single user)\n - Set up necessary directories\n - Install Homebrew\n\n4. **Add Homebrew to PATH:**\n The installer will suggest adding Homebrew to your PATH. Add to ~/.bashrc or ~/.profile:\n\n ```bash\n echo 'eval \"$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)\"' >> ~/.bashrc\n source ~/.bashrc\n ```\n\n Or for single-user installation:\n ```bash\n echo 'eval \"$($HOME/.linuxbrew/bin/brew shellenv)\"' >> ~/.bashrc\n source ~/.bashrc\n ```\n\n5. **Verify installation:**\n ```bash\n brew --version\n which brew\n brew doctor\n ```\n\n6. **Run brew doctor and fix issues:**\n `brew doctor` will check for common issues. Follow its recommendations:\n - Install recommended dependencies\n - Fix PATH issues\n - Update outdated software\n\n7. **Install recommended packages:**\n Homebrew recommends installing gcc:\n ```bash\n brew install gcc\n ```\n\n8. **Configure Homebrew (optional):**\n - Disable analytics: `brew analytics off`\n - Set up auto-update preferences\n - Configure tap repositories\n\n9. **Show basic Homebrew usage:**\n Explain to the user:\n - `brew install ` - Install a package\n - `brew uninstall ` - Remove a package\n - `brew upgrade` - Upgrade all packages\n - `brew update` - Update Homebrew itself\n - `brew list` - List installed packages\n - `brew search ` - Search for packages\n - `brew info ` - Get package info\n - `brew doctor` - Check for issues\n - `brew cleanup` - Remove old versions\n\n10. **Set up common taps (optional):**\n Ask if user wants popular taps:\n ```bash\n brew tap homebrew/cask-fonts # for fonts\n brew tap homebrew/cask-versions # for alternative versions\n ```\n\n11. **Handle path conflicts:**\n Check if Homebrew binaries conflict with system packages:\n ```bash\n which -a python3\n which -a git\n ```\n Explain that Homebrew packages take precedence if in PATH correctly.\n\n12. **Performance optimization:**\n - Set up Homebrew bottle (binary package) cache\n - Configure number of parallel downloads:\n ```bash\n echo 'export HOMEBREW_MAKE_JOBS=4' >> ~/.bashrc\n ```\n\n13. **Provide best practices:**\n - Run `brew update` regularly\n - Run `brew upgrade` to keep packages current\n - Run `brew cleanup` to free up space\n - Use `brew doctor` to diagnose issues\n - Pin packages you don't want upgraded: `brew pin `\n - Prefer Homebrew for development tools, apt for system packages\n - Don't run brew with sudo\n\n## Important notes:\n- Homebrew on Linux is called \"Linuxbrew\"\n- Don't use sudo with brew commands\n- Homebrew compiles from source if no bottle (binary) is available\n- Can coexist with apt/apt-get\n- Takes up significant disk space\n- Compilation can take time\n- Keep PATH properly configured\n", "filepath": "installation/clis/install-brew.md" }, { "name": "install-gh-cli", "category": "clis", "description": "", "tags": [], "content": "# Install and Authenticate GitHub CLI (gh)\n\nYou are helping the user install and authenticate the GitHub CLI tool.\n\n## Your tasks:\n\n1. 
**Check if gh is already installed:**\n ```bash\n which gh\n gh --version\n ```\n\n If already installed and authenticated:\n ```bash\n gh auth status\n ```\n\n2. **Install GitHub CLI (if not installed):**\n\n **Method 1: Using official repository (recommended):**\n ```bash\n # Add the GPG key\n sudo mkdir -p -m 755 /etc/apt/keyrings\n wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null\n sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg\n\n # Add the repository\n echo \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main\" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null\n\n # Install\n sudo apt update\n sudo apt install gh\n ```\n\n **Method 2: Using snap:**\n ```bash\n sudo snap install gh\n ```\n\n **Method 3: Using Homebrew (if installed):**\n ```bash\n brew install gh\n ```\n\n3. **Verify installation:**\n ```bash\n gh --version\n which gh\n ```\n\n4. **Authenticate with GitHub:**\n\n **Interactive authentication (recommended):**\n ```bash\n gh auth login\n ```\n\n This will prompt for:\n - GitHub.com or GitHub Enterprise Server\n - Preferred protocol (HTTPS or SSH)\n - Authentication method (web browser or token)\n\n **Via web browser (easiest):**\n - Select \"Login with a web browser\"\n - Follow the one-time code and URL\n - Authorize in browser\n\n **Via token:**\n - Generate a token at https://github.com/settings/tokens\n - Select \"Login with authentication token\"\n - Paste the token\n\n5. **Verify authentication:**\n ```bash\n gh auth status\n ```\n\n Should show:\n - Logged in to github.com\n - Account name\n - Token scopes\n\n6. **Configure gh settings:**\n\n **Set default editor:**\n ```bash\n gh config set editor vim\n # or\n gh config set editor nano\n # or\n gh config set editor code # VS Code\n ```\n\n **Set default protocol:**\n ```bash\n gh config set git_protocol ssh\n # or\n gh config set git_protocol https\n ```\n\n **View all config:**\n ```bash\n gh config list\n ```\n\n7. **Set up SSH key (if using SSH protocol):**\n ```bash\n # Generate SSH key if needed\n ssh-keygen -t ed25519 -C \"your_email@example.com\"\n\n # Add to ssh-agent\n eval \"$(ssh-agent -s)\"\n ssh-add ~/.ssh/id_ed25519\n\n # Add to GitHub\n gh ssh-key add ~/.ssh/id_ed25519.pub --title \"My Ubuntu Desktop\"\n\n # Or copy public key to GitHub manually\n cat ~/.ssh/id_ed25519.pub\n ```\n\n8. **Test GitHub connectivity:**\n ```bash\n # Test SSH connection\n ssh -T git@github.com\n\n # Test gh CLI\n gh repo list\n gh auth status\n ```\n\n9. **Configure git to use gh for credentials:**\n ```bash\n gh auth setup-git\n ```\n\n This configures git to use gh as a credential helper.\n\n10. 
**Show basic gh commands:**\n\n **Repository operations:**\n - `gh repo create` - Create a repository\n - `gh repo clone ` - Clone a repository\n - `gh repo view` - View repository details\n - `gh repo list` - List your repositories\n - `gh repo fork` - Fork a repository\n\n **Pull requests:**\n - `gh pr create` - Create a pull request\n - `gh pr list` - List pull requests\n - `gh pr view ` - View a PR\n - `gh pr checkout ` - Checkout a PR\n - `gh pr merge ` - Merge a PR\n - `gh pr review ` - Review a PR\n\n **Issues:**\n - `gh issue create` - Create an issue\n - `gh issue list` - List issues\n - `gh issue view ` - View an issue\n - `gh issue close ` - Close an issue\n\n **Workflows:**\n - `gh workflow list` - List workflows\n - `gh workflow view ` - View workflow\n - `gh workflow run ` - Trigger a workflow\n - `gh run list` - List workflow runs\n - `gh run view ` - View a run\n\n **Gists:**\n - `gh gist create ` - Create a gist\n - `gh gist list` - List gists\n - `gh gist view ` - View a gist\n\n11. **Set up shell completion:**\n\n **For bash:**\n ```bash\n gh completion -s bash > ~/.gh-completion.bash\n echo 'source ~/.gh-completion.bash' >> ~/.bashrc\n source ~/.bashrc\n ```\n\n **For zsh:**\n ```bash\n gh completion -s zsh > ~/.gh-completion.zsh\n echo 'source ~/.gh-completion.zsh' >> ~/.zshrc\n source ~/.zshrc\n ```\n\n12. **Configure multiple accounts (if needed):**\n ```bash\n # Add another account\n GH_HOST=github.com gh auth login\n\n # Switch between accounts\n gh auth switch\n ```\n\n13. **Set up aliases (optional):**\n ```bash\n gh alias set pv 'pr view'\n gh alias set co 'pr checkout'\n gh alias set bugs 'issue list --label=bug'\n ```\n\n List aliases:\n ```bash\n gh alias list\n ```\n\n14. **Authenticate with GitHub Enterprise (if applicable):**\n ```bash\n gh auth login --hostname github.example.com\n ```\n\n15. **Troubleshooting common issues:**\n\n **Permission denied:**\n - Check auth status: `gh auth status`\n - Re-authenticate: `gh auth login`\n - Check token scopes\n\n **SSH issues:**\n - Verify SSH key: `ssh -T git@github.com`\n - Add SSH key to GitHub: `gh ssh-key add`\n - Check ssh-agent: `ssh-add -l`\n\n **Rate limiting:**\n - Check rate limit: `gh api rate_limit`\n - Use authentication to increase limits\n\n16. **Update gh:**\n ```bash\n sudo apt update\n sudo apt upgrade gh\n # or\n brew upgrade gh\n # or\n sudo snap refresh gh\n ```\n\n17. **Advanced configuration:**\n\n **Custom API endpoint:**\n ```bash\n gh config set api_endpoint https://api.github.com\n ```\n\n **Disable prompts:**\n ```bash\n gh config set prompt disabled\n ```\n\n **Configure pager:**\n ```bash\n gh config set pager less\n ```\n\n18. **Security best practices:**\n - Use SSH keys instead of HTTPS when possible\n - Use tokens with minimal required scopes\n - Rotate tokens regularly\n - Don't share tokens\n - Use different tokens for different machines\n - Enable 2FA on GitHub account\n - Review authorized applications regularly\n\n19. **Provide workflow examples:**\n\n **Create a repo and push:**\n ```bash\n mkdir my-project\n cd my-project\n git init\n echo \"# My Project\" > README.md\n git add README.md\n git commit -m \"Initial commit\"\n gh repo create my-project --public --source=. --push\n ```\n\n **Fork and clone:**\n ```bash\n gh repo fork owner/repo --clone\n ```\n\n **Create PR from current branch:**\n ```bash\n gh pr create --title \"My changes\" --body \"Description of changes\"\n ```\n\n20. 
**Report findings:**\n Summarize:\n - Installation status\n - Authentication status\n - Configured settings\n - Available accounts\n - Next steps\n\n## Important notes:\n- gh is the official GitHub CLI\n- Requires GitHub account\n- Can use HTTPS or SSH protocol\n- SSH is generally more secure and convenient\n- gh can replace many git operations with simpler syntax\n- Shell completion is very helpful\n- Keep gh updated for latest features\n- Multiple accounts are supported\n- Works with both GitHub.com and GitHub Enterprise\n- Tokens should have minimal required scopes\n", "filepath": "installation/clis/install-gh-cli.md" }, { "name": "install-pipx", "category": "clis", "description": "", "tags": [], "content": "# Install pipx and Suggest Packages\n\nYou are helping the user install pipx and suggesting useful packages to install with it.\n\n## Your tasks:\n\n1. **Explain what pipx is:**\n pipx is a tool to install and run Python applications in isolated environments. Unlike pip which installs packages globally or in the current environment, pipx creates a separate virtual environment for each application.\n\n2. **Check if pipx is already installed:**\n ```bash\n which pipx\n pipx --version\n ```\n\n If already installed:\n ```bash\n pipx list\n ```\n\n3. **Install pipx:**\n\n **Method 1: Using apt (Ubuntu 23.04+):**\n ```bash\n sudo apt update\n sudo apt install pipx\n ```\n\n **Method 2: Using pip:**\n ```bash\n python3 -m pip install --user pipx\n python3 -m pipx ensurepath\n ```\n\n **Method 3: Using Homebrew (if installed):**\n ```bash\n brew install pipx\n ```\n\n4. **Ensure pipx is on PATH:**\n ```bash\n pipx ensurepath\n ```\n\n Then restart shell or:\n ```bash\n source ~/.bashrc\n ```\n\n5. **Verify installation:**\n ```bash\n pipx --version\n which pipx\n pipx list\n ```\n\n6. **Explain pipx benefits:**\n - Each app in isolated environment (no dependency conflicts)\n - Easy to install/uninstall applications\n - Applications available system-wide\n - No need to activate virtual environments\n - Perfect for CLI tools\n - Automatic PATH configuration\n\n7. **Show basic pipx usage:**\n - `pipx install ` - Install a package\n - `pipx uninstall ` - Uninstall a package\n - `pipx list` - List installed packages\n - `pipx upgrade ` - Upgrade a package\n - `pipx upgrade-all` - Upgrade all packages\n - `pipx run ` - Run without installing\n - `pipx inject ` - Add dependency to app\n\n8. 
**Suggest essential Python CLI tools:**\n\n **Development tools:**\n ```bash\n pipx install black # Code formatter\n pipx install flake8 # Linter\n pipx install pylint # Code analyzer\n pipx install mypy # Static type checker\n pipx install isort # Import sorter\n pipx install autopep8 # Auto formatter\n pipx install bandit # Security linter\n ```\n\n **Project management:**\n ```bash\n pipx install poetry # Dependency management\n pipx install pipenv # Virtual environment manager\n pipx install cookiecutter # Project templates\n pipx install tox # Testing automation\n ```\n\n **Productivity tools:**\n ```bash\n pipx install httpie # HTTP client (better than curl)\n pipx install youtube-dl # Download videos\n pipx install yt-dlp # youtube-dl fork (maintained)\n pipx install tldr # Simplified man pages\n pipx install howdoi # Code search from command line\n ```\n\n **Data science & analysis:**\n ```bash\n pipx install jupyter # Jupyter notebooks\n pipx install jupyterlab # JupyterLab\n pipx install datasette # Data exploration\n pipx install csvkit # CSV tools\n ```\n\n **File & text processing:**\n ```bash\n pipx install pdfplumber # PDF text extraction\n pipx install pdf2image # PDF to image converter\n pipx install rich-cli # Rich text in terminal\n pipx install glances # System monitoring\n ```\n\n **Cloud & infrastructure:**\n ```bash\n pipx install ansible # Automation\n pipx install awscli # AWS command line\n pipx install httpie # API testing\n pipx install docker-compose # Docker orchestration\n ```\n\n **Documentation:**\n ```bash\n pipx install mkdocs # Documentation generator\n pipx install sphinx # Documentation tool\n pipx install doc8 # Documentation linter\n ```\n\n **Testing & quality:**\n ```bash\n pipx install pytest # Testing framework\n pipx install coverage # Code coverage\n pipx install pre-commit # Git hooks manager\n ```\n\n9. **Suggest packages based on user's interests:**\n Ask the user what they work with:\n - Web development?\n - Data science?\n - DevOps?\n - Security?\n - Content creation?\n\n Then suggest relevant packages.\n\n10. **Install a few essential packages:**\n Recommend installing at minimum:\n ```bash\n pipx install httpie # Better HTTP client\n pipx install tldr # Quick command help\n pipx install black # Python formatter (if they code)\n pipx install glances # System monitor\n ```\n\n11. **Show how to use pipx run (temporary usage):**\n ```bash\n # Run without installing\n pipx run pycowsay \"Hello!\"\n pipx run black --version\n\n # Useful for one-off tasks\n pipx run cookiecutter gh:audreyr/cookiecutter-pypackage\n ```\n\n12. **Show how to manage installations:**\n\n **List all installed apps:**\n ```bash\n pipx list\n pipx list --verbose\n ```\n\n **Upgrade specific package:**\n ```bash\n pipx upgrade black\n ```\n\n **Upgrade all packages:**\n ```bash\n pipx upgrade-all\n ```\n\n **Uninstall package:**\n ```bash\n pipx uninstall black\n ```\n\n **Reinstall package:**\n ```bash\n pipx reinstall black\n ```\n\n13. **Show how to inject additional dependencies:**\n Some apps need extra packages:\n ```bash\n pipx install ansible\n pipx inject ansible ansible-lint\n pipx inject ansible molecule\n ```\n\n14. **Configure pipx:**\n\n **Check current configuration:**\n ```bash\n pipx environment\n ```\n\n **Change installation location (if needed):**\n ```bash\n export PIPX_HOME=~/.local/pipx\n export PIPX_BIN_DIR=~/.local/bin\n ```\n\n15. 
**Show differences between pip and pipx:**\n - `pip install ` - Installs in current environment\n - `pipx install ` - Installs in isolated environment\n - Use pip for: libraries, dependencies\n - Use pipx for: CLI applications, standalone tools\n\n16. **Troubleshooting:**\n\n **Package not in PATH:**\n ```bash\n pipx ensurepath\n source ~/.bashrc\n echo $PATH | grep .local/bin\n ```\n\n **Broken installation:**\n ```bash\n pipx reinstall \n ```\n\n **Clean up:**\n ```bash\n pipx uninstall-all\n ```\n\n17. **Advanced usage:**\n\n **Specify Python version:**\n ```bash\n pipx install --python python3.11 black\n ```\n\n **Install from git:**\n ```bash\n pipx install git+https://github.com/user/repo.git\n ```\n\n **Install with extras:**\n ```bash\n pipx install 'package[extra1,extra2]'\n ```\n\n18. **Integration with other tools:**\n\n **pre-commit integration:**\n ```bash\n pipx install pre-commit\n pre-commit install\n ```\n\n **VSCode integration:**\n - Installed tools (black, flake8, mypy) are auto-detected\n - No need to install in each project\n\n19. **Maintenance commands:**\n ```bash\n # Update pipx itself\n python3 -m pip install --user --upgrade pipx\n\n # Upgrade all installed packages\n pipx upgrade-all\n\n # List outdated packages\n pipx list --verbose | grep -A 2 \"upgrade available\"\n ```\n\n20. **Provide recommendations:**\n - Use pipx for all Python CLI tools\n - Keep applications updated with `pipx upgrade-all`\n - Don't use pip for global installations anymore\n - Use `pipx run` to try packages before installing\n - Install project-specific tools (black, flake8) with pipx\n - Consider adding `pipx upgrade-all` to crontab\n - Keep separate from project dependencies (use venv/poetry for those)\n\n21. **Show common workflow:**\n ```bash\n # Install essential tools\n pipx install black\n pipx install flake8\n pipx install mypy\n\n # In your project\n cd my-project\n black .\n flake8 .\n mypy .\n\n # No need to activate virtual environment!\n ```\n\n## Important notes:\n- pipx requires Python 3.6+\n- Each app gets its own virtual environment\n- Apps are available system-wide after installation\n- Perfect for CLI tools, not for libraries\n- Keeps system Python clean\n- No dependency conflicts between apps\n- Must be on PATH - use `pipx ensurepath`\n- Can coexist with pip and venv\n- Use pip for project dependencies, pipx for tools\n- Regular updates recommended: `pipx upgrade-all`\n", "filepath": "installation/clis/install-pipx.md" }, { "name": "install-sdkman", "category": "clis", "description": "", "tags": [], "content": "# Install SDKMAN on Linux\n\nYou are helping the user install SDKMAN for managing parallel versions of multiple Software Development Kits.\n\n## Your tasks:\n\n1. **Check if SDKMAN is already installed:**\n - Check: `sdk version` or `which sdk`\n - Check for SDKMAN directory: `ls -la ~/.sdkman`\n - If already installed, ask if they want to update it\n\n2. **Check prerequisites:**\n SDKMAN requires:\n - curl: `curl --version`\n - zip/unzip: `which zip unzip`\n - bash or zsh shell\n\n Install missing prerequisites:\n ```bash\n sudo apt update\n sudo apt install curl zip unzip\n ```\n\n3. **Download and install SDKMAN:**\n ```bash\n curl -s \"https://get.sdkman.io\" | bash\n ```\n\n The installer will:\n - Install to `~/.sdkman`\n - Add initialization to ~/.bashrc or ~/.zshrc\n - Set up sdk command\n\n4. **Initialize SDKMAN in current shell:**\n ```bash\n source \"$HOME/.sdkman/bin/sdkman-init.sh\"\n ```\n\n5. 
**Verify installation:**\n ```bash\n sdk version\n sdk help\n ```\n\n6. **Show available SDKs:**\n ```bash\n sdk list\n ```\n\n Common SDKs available:\n - Java (various distributions: OpenJDK, Graal, Corretto, etc.)\n - Gradle\n - Maven\n - Kotlin\n - Scala\n - Groovy\n - Spring Boot\n - Micronaut\n - And many more\n\n7. **Install a few common SDKs (ask user first):**\n\n **Java:**\n ```bash\n sdk list java\n sdk install java # installs latest stable\n # Or specific version:\n # sdk install java 17.0.9-tem\n ```\n\n **Gradle:**\n ```bash\n sdk install gradle\n ```\n\n **Maven:**\n ```bash\n sdk install maven\n ```\n\n8. **Show basic SDKMAN usage:**\n Explain to the user:\n - `sdk list ` - List available versions of an SDK\n - `sdk install ` - Install specific version\n - `sdk install ` - Install latest stable\n - `sdk uninstall ` - Remove a version\n - `sdk use ` - Use version for current shell\n - `sdk default ` - Set default version\n - `sdk current ` - Show current version in use\n - `sdk current` - Show all current versions\n - `sdk upgrade ` - Upgrade to latest version\n - `sdk update` - Update SDK list\n - `sdk selfupdate` - Update SDKMAN itself\n\n9. **Configure SDKMAN (optional):**\n Edit `~/.sdkman/etc/config`:\n\n ```bash\n # Auto answer 'yes' to all prompts\n sdkman_auto_answer=true\n\n # Automatically use Java version from .sdkmanrc\n sdkman_auto_env=true\n\n # Check for SDK updates on login\n sdkman_checkup_enable=true\n\n # Automatically selfupdate\n sdkman_selfupdate_enable=true\n ```\n\n10. **Set up project-specific SDK versions:**\n Create `.sdkmanrc` file in project root:\n ```bash\n java=17.0.9-tem\n gradle=8.4\n maven=3.9.5\n ```\n\n Enable auto-env: `sdk env` or `sdk env install`\n\n11. **Verify PATH and environment:**\n ```bash\n which java\n java -version\n echo $JAVA_HOME\n ```\n\n12. **Show how to switch Java versions:**\n Demonstrate:\n ```bash\n sdk list java\n sdk install java 11.0.21-tem\n sdk install java 17.0.9-tem\n sdk use java 11.0.21-tem\n java -version\n sdk default java 17.0.9-tem\n ```\n\n13. **Offline mode (optional):**\n Explain offline mode for when internet is unavailable:\n ```bash\n sdk offline enable\n sdk offline disable\n ```\n\n14. **Flush and clean:**\n ```bash\n sdk flush # Clear caches\n sdk flush temp # Clear temporary files\n ```\n\n15. **Provide best practices:**\n - Run `sdk update` regularly to refresh SDK lists\n - Run `sdk selfupdate` to keep SDKMAN current\n - Use `sdk current` to verify active versions\n - Use `.sdkmanrc` files for project-specific versions\n - Enable `sdkman_auto_env` for automatic version switching\n - Use `sdk env` when entering project directories\n - Keep multiple Java versions for different projects\n - Set sensible defaults with `sdk default`\n - SDKMAN doesn't require sudo\n - Check `~/.sdkman/candidates/` to see installed SDKs\n\n16. 
**Troubleshooting:**\n - If `sdk` command not found, source the init script\n - Check `~/.bashrc` has SDKMAN initialization\n - Restart shell or source bashrc: `source ~/.bashrc`\n - Check PATH: `echo $PATH | grep sdkman`\n - Verify SDKMAN directory exists: `ls ~/.sdkman`\n\n## Important notes:\n- SDKMAN is user-level, doesn't require sudo\n- Each SDK version is kept separate in `~/.sdkman/candidates/`\n- Can coexist with system Java or other installation methods\n- Using `sdk use` only affects current shell\n- Using `sdk default` affects all new shells\n- .sdkmanrc files are project-specific\n- SDKMAN is particularly popular in JVM ecosystem\n- Bash completion is included\n", "filepath": "installation/clis/install-sdkman.md" }, { "name": "install-yadm", "category": "clis", "description": "", "tags": [], "content": "# Install YADM (Yet Another Dotfiles Manager)\n\nYou are helping the user install and set up YADM for managing their dotfiles.\n\n## Your tasks:\n\n1. **Check if YADM is already installed:**\n - Check: `which yadm`\n - If installed: `yadm version`\n - If already installed, ask the user if they want to:\n - Configure it for first use\n - Upgrade to the latest version\n - Or exit\n\n2. **Install YADM:**\n\n **Option 1: Using apt (recommended for Ubuntu):**\n ```bash\n sudo apt update\n sudo apt install yadm\n ```\n\n **Option 2: Using the install script (for latest version):**\n ```bash\n curl -fsSL https://github.com/TheLocehiliosan/yadm/raw/master/bootstrap/install_yadm.sh | sudo bash\n ```\n\n Ask the user which installation method they prefer.\n\n3. **Verify installation:**\n - Check version: `yadm version`\n - Check location: `which yadm`\n\n4. **Initialize YADM (if user wants to set it up):**\n\n **For new setup:**\n ```bash\n yadm init\n ```\n\n **For cloning existing dotfiles:**\n Ask the user if they have an existing dotfiles repository to clone.\n If yes, get the repository URL and run:\n ```bash\n yadm clone \n ```\n\n5. **Guide the user through initial configuration:**\n\n **Add existing dotfiles:**\n Suggest common dotfiles to track:\n - `~/.bashrc`\n - `~/.bash_profile`\n - `~/.profile`\n - `~/.gitconfig`\n - `~/.ssh/config` (if exists)\n - `~/.config/` directories (ask which ones)\n\n Show how to add files:\n ```bash\n yadm add ~/.bashrc\n yadm add ~/.gitconfig\n yadm commit -m \"Initial dotfiles commit\"\n ```\n\n6. **Set up remote repository (if user wants):**\n Ask if they want to set up a remote repository:\n ```bash\n yadm remote add origin \n yadm push -u origin main\n ```\n\n7. **Explain basic YADM usage:**\n - `yadm status` - Check status\n - `yadm add ` - Track a file\n - `yadm commit -m \"message\"` - Commit changes\n - `yadm push` - Push to remote\n - `yadm pull` - Pull from remote\n - `yadm list` - List tracked files\n - `yadm diff` - Show differences\n\n8. **Set up encryption (optional):**\n Ask if the user wants to encrypt sensitive files:\n ```bash\n echo \".ssh/id_rsa\" >> ~/.config/yadm/encrypt\n yadm encrypt\n ```\n\n9. **Set up bootstrap (optional):**\n Explain that YADM can run a bootstrap script on new systems.\n Offer to create a basic `~/.config/yadm/bootstrap` script:\n ```bash\n #!/bin/bash\n # Install common packages\n sudo apt update\n sudo apt install -y git vim tmux\n ```\n\n10. 
**Provide next steps and best practices:**\n - Regularly commit dotfile changes: `yadm add -u && yadm commit -m \"Update dotfiles\"`\n - Use branches for experimental configurations\n - Use `.config/yadm/encrypt` for sensitive files\n - Consider alternate files for different systems (using YADM's alternate file feature)\n - Backup remote repository (GitHub/GitLab)\n\n## Important notes:\n- Ask before making any commits or pushes\n- Explain the difference between YADM and regular git (YADM operates on $HOME)\n- Warn about not committing sensitive information unencrypted\n- If user already has dotfiles in a git repo, explain migration process\n- Be clear that YADM commands work like git commands\n", "filepath": "installation/clis/install-yadm.md" }, { "name": "setup-aws-cli", "category": "clis", "description": "Set up or validate AWS CLI configuration", "tags": [ "cloud", "aws", "setup", "validation", "project", "gitignored" ], "content": "You are helping the user set up or validate their AWS CLI configuration.\n\n## Process\n\n1. **Check if AWS CLI is installed**\n - Run `aws --version` to check installation\n - If not installed, install using: `sudo apt install awscli` or `pip3 install awscli --upgrade --user`\n\n2. **Check existing configuration**\n - Run `aws configure list` to see current config\n - Check `~/.aws/credentials` and `~/.aws/config` files if they exist\n\n3. **Validate configuration**\n - If credentials exist, test with: `aws sts get-caller-identity`\n - This will confirm the credentials are valid and show account info\n\n4. **Configure if needed**\n - If not configured or user wants to update:\n - Run `aws configure` interactively OR\n - Ask user for: AWS Access Key ID, Secret Access Key, default region, output format\n - Offer to set up profiles if user has multiple AWS accounts\n\n5. **Additional setup suggestions**\n - Suggest installing `aws-shell` for better CLI experience\n - Recommend setting up AWS SSO if applicable\n - Suggest configuring MFA if not already set up\n\n## Output\n\nProvide a summary showing:\n- AWS CLI version\n- Configured profiles\n- Current default profile and region\n- Validation status\n- Any recommendations for improvement", "filepath": "installation/clis/setup-aws-cli.md" }, { "name": "setup-b2-cli", "category": "clis", "description": "Set up or validate Backblaze B2 CLI configuration", "tags": [ "cloud", "b2", "backblaze", "setup", "validation", "project", "gitignored" ], "content": "You are helping the user set up or validate their Backblaze B2 CLI configuration.\n\n## Process\n\n1. **Check if B2 CLI is installed**\n - Run `b2 version` to check installation\n - If not installed, install using: `pip3 install b2 --upgrade --user` or `sudo apt install backblaze-b2`\n\n2. **Check existing authorization**\n - Run `b2 get-account-info` to see if already authorized\n - Check `~/.b2_account_info` if it exists\n\n3. **Validate configuration**\n - If authorized, test by listing buckets: `b2 list-buckets`\n - Verify account ID and key are working\n\n4. **Configure if needed**\n - If not configured or user wants to update:\n - Ask user for Application Key ID and Application Key\n - Run `b2 authorize-account `\n - Alternatively, use `b2 clear-account` first if re-authorizing\n\n5. 
**Additional setup**\n - Show available buckets: `b2 list-buckets`\n - Suggest setting up lifecycle rules if doing backups\n - Recommend testing upload/download with a small file\n\n## Output\n\nProvide a summary showing:\n- B2 CLI version\n- Authorization status\n- Account ID (if authorized)\n- List of buckets (if any)\n- Any recommendations for optimization", "filepath": "installation/clis/setup-b2-cli.md" }, { "name": "setup-rclone", "category": "clis", "description": "Set up rclone for cloud storage management", "tags": [ "cloud", "rclone", "setup", "backup", "project", "gitignored" ], "content": "You are helping the user set up rclone for cloud storage management.\n\n## Process\n\n1. **Check if rclone is installed**\n - Run `rclone version` to check installation\n - If not installed, install using: `sudo apt install rclone` or download from rclone.org\n\n2. **Check existing remotes**\n - Run `rclone listremotes` to see configured remotes\n - Run `rclone config file` to show config file location\n\n3. **Configure new remotes if needed**\n - Run `rclone config` for interactive setup\n - Guide user through:\n - Choosing storage type (S3, B2, Google Drive, etc.)\n - Entering credentials\n - Testing connection\n\n4. **Validate existing remotes**\n - For each remote, test with: `rclone lsd :`\n - Verify access and permissions\n\n5. **Optimization suggestions**\n - Suggest setting up encrypted remotes for sensitive data\n - Recommend bandwidth limits if needed: `--bwlimit`\n - Suggest useful flags for backups: `--transfers`, `--checkers`\n - Offer to create wrapper scripts for common operations\n\n## Output\n\nProvide a summary showing:\n- rclone version\n- List of configured remotes with types\n- Validation status for each remote\n- Suggested next steps or optimizations", "filepath": "installation/clis/setup-rclone.md" } ], "guis": [ { "name": "input-remapper", "category": "guis", "description": "", "tags": [], "content": "Install Input Remapper.\n", "filepath": "installation/guis/input-remapper.md" } ], "conda": [ { "name": "manage-conda-environments", "category": "conda", "description": "", "tags": [], "content": "# Manage Conda Environments\n\nYou are helping the user list conda environments and work with them to add packages.\n\n## Your tasks:\n\n1. **Check if conda is installed:**\n ```bash\n which conda\n conda --version\n conda info\n ```\n\n If not installed, offer to help install Miniconda or Anaconda.\n\n2. **List all conda environments:**\n ```bash\n conda env list\n # or\n conda info --envs\n ```\n\n This shows:\n - All environment names\n - Their locations\n - Current active environment (marked with *)\n\n3. **Show current environment:**\n ```bash\n echo $CONDA_DEFAULT_ENV\n conda info --envs | grep \"*\"\n ```\n\n4. **Display detailed environment information:**\n For each environment, show:\n ```bash\n # List packages in specific environment\n conda list -n \n\n # Show environment details\n conda env export -n \n\n # Show size\n du -sh ~/miniconda3/envs/\n # or\n du -sh ~/anaconda3/envs/\n ```\n\n5. **Ask user which environment to work with:**\n Present the list and ask which environment they want to modify or examine.\n\n6. **Activate environment:**\n ```bash\n conda activate \n ```\n\n Verify activation:\n ```bash\n conda info --envs\n python --version\n which python\n ```\n\n7. 
**Show packages in environment:**\n ```bash\n conda list\n # or for specific environment\n conda list -n \n\n # Show only explicitly installed packages\n conda env export --from-history -n \n ```\n\n8. **Search for packages:**\n Ask what packages user wants to install:\n ```bash\n conda search \n conda search --info\n ```\n\n9. **Install packages:**\n\n **Single package:**\n ```bash\n conda install \n # or specify environment\n conda install -n \n ```\n\n **Multiple packages:**\n ```bash\n conda install \n ```\n\n **Specific version:**\n ```bash\n conda install =\n # Example:\n conda install python=3.11\n conda install numpy=1.24.0\n ```\n\n **From specific channel:**\n ```bash\n conda install -c conda-forge \n ```\n\n10. **Suggest common packages by category:**\n\n **Data Science:**\n ```bash\n conda install numpy pandas matplotlib seaborn scikit-learn\n conda install jupyter jupyterlab notebook\n conda install scipy statsmodels\n ```\n\n **Machine Learning:**\n ```bash\n conda install tensorflow pytorch torchvision\n conda install keras scikit-learn xgboost\n conda install -c conda-forge lightgbm\n ```\n\n **Development:**\n ```bash\n conda install ipython black flake8 pytest\n conda install requests beautifulsoup4 selenium\n conda install flask django fastapi\n ```\n\n **Visualization:**\n ```bash\n conda install matplotlib seaborn plotly\n conda install bokeh altair\n ```\n\n **Database:**\n ```bash\n conda install sqlalchemy psycopg2 pymongo\n conda install sqlite\n ```\n\n11. **Update packages:**\n\n **Update specific package:**\n ```bash\n conda update \n ```\n\n **Update all packages in environment:**\n ```bash\n conda update --all\n ```\n\n **Update conda itself:**\n ```bash\n conda update conda\n ```\n\n12. **Remove packages:**\n ```bash\n conda remove \n # or from specific environment\n conda remove -n \n ```\n\n13. **Create new environment:**\n Offer to create a new environment:\n ```bash\n # Basic environment\n conda create -n python=3.11\n\n # With packages\n conda create -n myenv python=3.11 numpy pandas jupyter\n\n # From file\n conda env create -f environment.yml\n ```\n\n14. **Export environment:**\n Help user export environment for sharing:\n\n **Full export (with all dependencies):**\n ```bash\n conda env export -n > environment.yml\n ```\n\n **Only explicitly installed packages:**\n ```bash\n conda env export --from-history -n > environment.yml\n ```\n\n **As requirements.txt:**\n ```bash\n conda list -n --export > requirements.txt\n ```\n\n15. **Clone environment:**\n ```bash\n conda create --name --clone \n ```\n\n16. **Clean up conda:**\n ```bash\n # Remove unused packages and caches\n conda clean --all\n\n # Remove packages cache\n conda clean --packages\n\n # Remove tarballs\n conda clean --tarballs\n\n # Check what would be removed\n conda clean --all --dry-run\n ```\n\n17. **Check environment conflicts:**\n ```bash\n # Check for broken dependencies\n conda info \n\n # Verify environment\n conda env export -n | conda env create -n test-env -f -\n ```\n\n18. **Show environment size:**\n ```bash\n # Size of all environments\n du -sh ~/miniconda3/envs/*\n # or\n du -sh ~/anaconda3/envs/*\n\n # Total conda size\n du -sh ~/miniconda3\n ```\n\n19. 
**Configure conda:**\n\n **Add channels:**\n ```bash\n conda config --add channels conda-forge\n conda config --add channels bioconda\n ```\n\n **Set channel priority:**\n ```bash\n conda config --set channel_priority strict\n ```\n\n **Show configuration:**\n ```bash\n conda config --show\n conda config --show channels\n ```\n\n **Remove channel:**\n ```bash\n conda config --remove channels \n ```\n\n20. **Use mamba (faster alternative):**\n If user has performance issues:\n ```bash\n # Install mamba\n conda install mamba -n base -c conda-forge\n\n # Use mamba instead of conda\n mamba install \n mamba search \n mamba env create -f environment.yml\n ```\n\n21. **Troubleshooting common issues:**\n\n **Environment not activating:**\n ```bash\n conda init bash\n source ~/.bashrc\n ```\n\n **Package conflicts:**\n ```bash\n # Create new environment instead\n conda create -n new-env python=3.11 \n ```\n\n **Slow package resolution:**\n ```bash\n # Use mamba\n conda install mamba -c conda-forge\n # or\n conda config --set solver libmamba\n ```\n\n **Conda command not found:**\n ```bash\n export PATH=\"$HOME/miniconda3/bin:$PATH\"\n conda init bash\n ```\n\n22. **Best practices to share:**\n - Create separate environment for each project\n - Use environment.yml for reproducibility\n - Pin important package versions\n - Use conda-forge channel for latest packages\n - Regularly clean up with `conda clean --all`\n - Don't install packages in base environment\n - Use mamba for faster package resolution\n - Export environments before major changes\n - Keep Python version explicit in environment\n - Use `--from-history` for cross-platform compatibility\n\n23. **Show workflow example:**\n ```bash\n # Create environment\n conda create -n data-project python=3.11\n\n # Activate it\n conda activate data-project\n\n # Install packages\n conda install numpy pandas jupyter matplotlib scikit-learn\n\n # Verify\n conda list\n\n # Export for sharing\n conda env export --from-history > environment.yml\n\n # Deactivate when done\n conda deactivate\n ```\n\n24. **Integration with Jupyter:**\n ```bash\n # Install ipykernel in environment\n conda activate \n conda install ipykernel\n\n # Register environment as Jupyter kernel\n python -m ipykernel install --user --name=\n\n # Now available in Jupyter\n jupyter lab\n ```\n\n25. **Report current status:**\n Summarize:\n - Number of environments\n - Active environment\n - Total disk usage\n - conda version\n - Suggested actions based on user needs\n\n## Important notes:\n- Always activate environment before installing packages\n- Base environment should be kept minimal\n- Use `-n ` to work with environments without activating\n- conda-forge channel has more packages than default\n- mamba is drop-in replacement, much faster\n- Environment.yml files ensure reproducibility\n- Pin versions in production environments\n- Clean up regularly to save disk space\n- Don't mix pip and conda unless necessary (prefer conda)\n- Use `--from-history` when exporting for other OS\n- Jupyter needs ipykernel in each environment\n- conda init modifies .bashrc - check if needed\n", "filepath": "python/conda/manage-conda-environments.md" }, { "name": "setup-conda-data-analysis", "category": "conda", "description": "Set up conda environment for data analysis", "tags": [ "python", "conda", "data-analysis", "jupyter", "pandas", "project", "gitignored" ], "content": "You are helping the user set up a conda environment for data analysis.\n\n## Process\n\n1. 
**Create base environment**\n ```bash\n conda create -n data-analysis python=3.11 -y\n conda activate data-analysis\n ```\n\n2. **Install core data analysis libraries**\n ```bash\n conda install -c conda-forge pandas numpy scipy -y\n ```\n\n3. **Install visualization libraries**\n ```bash\n conda install -c conda-forge matplotlib seaborn plotly -y\n pip install altair\n pip install bokeh\n ```\n\n4. **Install Jupyter ecosystem**\n ```bash\n conda install -c conda-forge jupyter jupyterlab ipywidgets -y\n pip install jupyterlab-git\n pip install jupyterlab-lsp\n ```\n\n5. **Install statistical and ML libraries**\n ```bash\n conda install -c conda-forge scikit-learn statsmodels -y\n pip install scipy\n pip install pingouin # Statistics\n ```\n\n6. **Install data processing tools**\n ```bash\n conda install -c conda-forge openpyxl xlrd -y # Excel support\n pip install pyarrow fastparquet # Parquet support\n pip install sqlalchemy # Database connectivity\n pip install beautifulsoup4 # Web scraping\n pip install requests # HTTP requests\n ```\n\n7. **Install data manipulation tools**\n ```bash\n pip install polars # Fast DataFrame library\n pip install dask # Parallel computing\n pip install vaex # Big data processing\n ```\n\n8. **Install database drivers**\n ```bash\n pip install psycopg2-binary # PostgreSQL\n pip install pymongo # MongoDB\n pip install redis # Redis\n ```\n\n9. **Install development tools**\n ```bash\n pip install black # Code formatting\n pip install pylint # Linting\n pip install ipdb # Debugging\n ```\n\n10. **Configure Jupyter extensions**\n - Enable useful extensions\n - Set up theme preferences\n - Configure autosave\n\n11. **Create example notebook**\n - Offer to create `~/notebooks/data-analysis-template.ipynb` with common imports\n\n## Output\n\nProvide a summary showing:\n- Environment name and setup status\n- Installed libraries grouped by category\n- Jupyter Lab configuration\n- Example import statements\n- Suggested workflows\n- Links to documentation", "filepath": "python/conda/setup-conda-data-analysis.md" }, { "name": "setup-conda-llm-finetune", "category": "conda", "description": "Set up conda environment for LLM fine-tuning", "tags": [ "python", "conda", "llm", "fine-tuning", "ai", "development", "project", "gitignored" ], "content": "You are helping the user set up a conda environment for LLM fine-tuning.\n\n## Process\n\n1. **Create base environment**\n ```bash\n conda create -n llm-finetune python=3.11 -y\n conda activate llm-finetune\n ```\n\n2. **Install PyTorch with ROCm**\n ```bash\n pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0\n ```\n\n3. **Install core fine-tuning libraries**\n\n **Hugging Face ecosystem:**\n ```bash\n pip install transformers\n pip install datasets\n pip install accelerate\n pip install evaluate\n pip install peft # Parameter-Efficient Fine-Tuning\n pip install bitsandbytes # Quantization (may need special build for ROCm)\n ```\n\n **Training frameworks:**\n ```bash\n pip install trl # Transformer Reinforcement Learning\n pip install deepspeed # Distributed training (if needed)\n ```\n\n4. **Install quantization and optimization tools**\n ```bash\n pip install optimum\n pip install auto-gptq # GPTQ quantization\n pip install autoawq # AWQ quantization\n ```\n\n5. 
**Install evaluation and monitoring tools**\n ```bash\n pip install wandb # Weights & Biases for experiment tracking\n pip install tensorboard\n pip install rouge-score # Text evaluation\n pip install sacrebleu # Translation metrics\n ```\n\n6. **Install data processing tools**\n ```bash\n pip install pandas\n pip install numpy\n pip install scipy\n pip install scikit-learn\n pip install nltk\n pip install spacy\n ```\n\n7. **Install specialized fine-tuning tools**\n ```bash\n pip install axolotl # LLM fine-tuning framework\n pip install unsloth # Fast fine-tuning (if compatible with ROCm)\n pip install qlora # Quantized LoRA\n ```\n\n8. **Install Jupyter for interactive work**\n ```bash\n conda install -c conda-forge jupyter jupyterlab ipywidgets -y\n ```\n\n9. **Create example fine-tuning script**\n - Offer to create `~/scripts/llm-finetune-example.py` with basic LoRA setup\n\n10. **Test installation**\n ```python\n import torch\n from transformers import AutoModelForCausalLM, AutoTokenizer\n from peft import LoraConfig, get_peft_model\n\n print(f\"PyTorch: {torch.__version__}\")\n print(f\"GPU available: {torch.cuda.is_available()}\")\n print(\"All libraries imported successfully!\")\n ```\n\n11. **Create resource estimation script**\n - Offer to create script to estimate VRAM needs for different model sizes\n\n12. **Suggest popular models for fine-tuning**\n - Llama 3.2 (3B, 8B)\n - Mistral 7B\n - Qwen 2.5 (7B, 14B)\n - Phi-3 (3.8B)\n\n## Output\n\nProvide a summary showing:\n- Environment name and setup status\n- Installed libraries grouped by purpose\n- GPU detection status\n- VRAM available for training\n- Suggested model sizes for available hardware\n- Example command to start fine-tuning\n- Links to documentation/tutorials", "filepath": "python/conda/setup-conda-llm-finetune.md" }, { "name": "setup-conda-rocm", "category": "conda", "description": "Set up conda environment for ROCm and PyTorch", "tags": [ "python", "conda", "rocm", "pytorch", "ai", "development", "project", "gitignored" ], "content": "You are helping the user set up a conda environment optimized for ROCm and PyTorch.\n\n## Process\n\n1. **Check if conda is installed**\n - Run: `conda --version`\n - If not installed, suggest installing Miniconda or Anaconda\n - Installation: `wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && bash Miniconda3-latest-Linux-x86_64.sh`\n\n2. **Verify ROCm is available on system**\n - Check: `rocminfo`\n - Get ROCm version: `rocminfo | grep \"Name:\" | head -1`\n - Typical ROCm versions: 5.7, 6.0, 6.1\n\n3. **Create conda environment**\n ```bash\n conda create -n rocm-pytorch python=3.11 -y\n conda activate rocm-pytorch\n ```\n\n4. **Install PyTorch with ROCm support**\n - Check compatible PyTorch version at: pytorch.org/get-started/locally/\n - Install based on ROCm version:\n\n ```bash\n # For ROCm 6.0\n pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0\n\n # For ROCm 5.7\n pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7\n ```\n\n5. **Install essential ML libraries**\n ```bash\n conda install -c conda-forge numpy scipy matplotlib jupyter ipython -y\n pip install pandas scikit-learn\n ```\n\n6. **Install deep learning tools**\n ```bash\n pip install transformers accelerate datasets\n pip install tensorboard\n pip install onnx onnxruntime\n ```\n\n7. 
**Test PyTorch ROCm integration**\n ```python\n import torch\n print(f\"PyTorch version: {torch.__version__}\")\n print(f\"CUDA available: {torch.cuda.is_available()}\") # ROCm uses CUDA API\n if torch.cuda.is_available():\n print(f\"Device name: {torch.cuda.get_device_name(0)}\")\n print(f\"Device count: {torch.cuda.device_count()}\")\n ```\n\n8. **Create activation script**\n - Offer to create `~/scripts/activate-rocm-pytorch.sh`:\n ```bash\n #!/bin/bash\n eval \"$(conda shell.bash hook)\"\n conda activate rocm-pytorch\n echo \"ROCm PyTorch environment activated\"\n python -c \"import torch; print(f'PyTorch: {torch.__version__}, CUDA available: {torch.cuda.is_available()}')\"\n ```\n\n9. **Optional: Install additional tools**\n - Suggest:\n - `timm` - PyTorch image models\n - `torchmetrics` - Metrics\n - `lightning` - PyTorch Lightning\n - `einops` - Tensor operations\n\n## Output\n\nProvide a summary showing:\n- Conda environment name and Python version\n- PyTorch version and ROCm compatibility\n- GPU detection status\n- List of installed packages\n- Test results showing GPU is accessible\n- Activation command for future use", "filepath": "python/conda/setup-conda-rocm.md" }, { "name": "setup-conda-stt-finetune", "category": "conda", "description": "Set up conda environment for speech-to-text fine-tuning", "tags": [ "python", "conda", "stt", "whisper", "speech", "ai", "fine-tuning", "project", "gitignored" ], "content": "You are helping the user set up a conda environment for speech-to-text (STT) fine-tuning.\n\n## Process\n\n1. **Create base environment**\n ```bash\n conda create -n stt-finetune python=3.11 -y\n conda activate stt-finetune\n ```\n\n2. **Install PyTorch with ROCm**\n ```bash\n pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0\n ```\n\n3. **Install Whisper and related libraries**\n ```bash\n pip install openai-whisper\n pip install faster-whisper # Optimized inference\n pip install whisperx # Advanced features\n ```\n\n4. **Install Hugging Face libraries**\n ```bash\n pip install transformers\n pip install datasets\n pip install accelerate\n pip install evaluate\n pip install peft # For LoRA fine-tuning\n ```\n\n5. **Install audio processing libraries**\n ```bash\n pip install librosa # Audio analysis\n pip install soundfile # Audio I/O\n pip install pydub # Audio manipulation\n pip install sox # Audio processing\n conda install -c conda-forge ffmpeg -y # Audio conversion\n ```\n\n6. **Install speech-specific tools**\n ```bash\n pip install jiwer # Word Error Rate calculation\n pip install speechbrain # Speech toolkit\n pip install pyannote.audio # Speaker diarization\n ```\n\n7. **Install data processing tools**\n ```bash\n pip install pandas\n pip install numpy\n pip install scipy\n pip install matplotlib\n pip install seaborn # Visualization\n ```\n\n8. **Install monitoring and experimentation**\n ```bash\n pip install wandb # Experiment tracking\n pip install tensorboard\n ```\n\n9. **Install Jupyter for interactive work**\n ```bash\n conda install -c conda-forge jupyter jupyterlab ipywidgets -y\n ```\n\n10. **Test installation**\n ```python\n import torch\n import whisper\n import librosa\n from transformers import WhisperProcessor, WhisperForConditionalGeneration\n\n print(f\"PyTorch: {torch.__version__}\")\n print(f\"GPU available: {torch.cuda.is_available()}\")\n print(\"All libraries imported successfully!\")\n ```\n\n11. 
**Suggest common datasets**\n - Common Voice (Mozilla)\n - LibriSpeech\n - TEDLIUM\n - Custom datasets\n\n12. **Create example script**\n - Offer to create `~/scripts/whisper-finetune-example.py` with basic setup\n\n## Output\n\nProvide a summary showing:\n- Environment name and setup status\n- Installed libraries grouped by purpose\n- GPU detection status\n- Available VRAM for training\n- Suggested datasets for fine-tuning\n- Example commands for testing\n- Links to documentation/tutorials", "filepath": "python/conda/setup-conda-stt-finetune.md" } ], "pyenv": [ { "name": "setup-pyenv", "category": "pyenv", "description": "Install pyenv and help user set up various Python versions", "tags": [ "python", "development", "pyenv", "versions", "setup", "project", "gitignored" ], "content": "You are helping the user install pyenv and set up multiple Python versions.\n\n## Process\n\n1. **Check if pyenv is already installed**\n - Run `pyenv --version` to check\n - Check `~/.pyenv` directory\n\n2. **Install pyenv if needed**\n - Install dependencies: `sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev`\n - Clone pyenv: `curl https://pyenv.run | bash`\n - Add to shell config (`~/.bashrc` or `~/.zshrc`):\n ```bash\n export PYENV_ROOT=\"$HOME/.pyenv\"\n export PATH=\"$PYENV_ROOT/bin:$PATH\"\n eval \"$(pyenv init -)\"\n eval \"$(pyenv virtualenv-init -)\"\n ```\n - Reload shell: `source ~/.bashrc`\n\n3. **Check currently installed Python versions**\n - Run `pyenv versions` to see installed versions\n - Run `python --version` to see system Python\n\n4. **Work with user to install desired versions**\n - Ask which Python versions they need\n - Show available versions: `pyenv install --list`\n - Common versions to suggest: 3.11.x, 3.12.x, 3.13.x\n - Install versions: `pyenv install 3.12.7` (example)\n\n5. **Configure Python versions**\n - Set global default: `pyenv global 3.12.7`\n - Set local (directory-specific): `pyenv local 3.11.5`\n - Show how to create virtualenvs: `pyenv virtualenv 3.12.7 myproject`\n\n6. **Verify installation**\n - Check active version: `pyenv version`\n - Test Python: `python --version`\n - Test pip: `pip --version`\n\n## Output\n\nProvide a summary showing:\n- pyenv installation status\n- List of installed Python versions\n- Current global/local version settings\n- Suggestions for useful versions based on user's needs", "filepath": "python/pyenv/setup-pyenv.md" } ], "av": [ { "name": "install-clamav", "category": "av", "description": "", "tags": [], "content": "You are tasked with installing and configuring ClamAV (command-line antivirus) and ClamTK (GUI frontend) on this Linux system if they are not already installed.\n\n## Your Task\n\nSet up a complete antivirus solution using ClamAV with the ClamTK graphical interface for easy management.\n\n## Installation Steps\n\n### 1. Check Current Installation Status\n- Verify if ClamAV is already installed: `dpkg -l | grep clamav`\n- Verify if ClamTK is already installed: `dpkg -l | grep clamtk`\n- If both are installed and working, inform the user and skip to configuration verification\n\n### 2. 
Install Packages (if needed)\nInstall the following packages using apt:\n- `clamav` - Core antivirus engine\n- `clamav-daemon` - ClamAV daemon for background scanning\n- `clamav-freshclam` - Virus definition updater\n- `clamtk` - Graphical user interface for ClamAV\n\nUse sudo for installation.\n\n### 3. Initial Configuration\n\nAfter installation:\n- Stop the freshclam service: `sudo systemctl stop clamav-freshclam`\n- Update virus definitions manually first: `sudo freshclam`\n- Start the freshclam service: `sudo systemctl start clamav-freshclam`\n- Enable freshclam to start on boot: `sudo systemctl enable clamav-freshclam`\n\n### 4. Configure ClamAV Daemon\n- Start the ClamAV daemon: `sudo systemctl start clamav-daemon`\n- Enable it for automatic startup: `sudo systemctl enable clamav-daemon`\n- Verify daemon is running: `sudo systemctl status clamav-daemon`\n\n### 5. Verify Installation\n- Check ClamAV version: `clamscan --version`\n- Check virus definition database date: `sudo freshclam --version` and verify freshclam status\n- Verify ClamTK launches: Inform user they can test by running `clamtk` from terminal or application menu\n\n### 6. Initial Scan Setup Recommendations\nProvide guidance on:\n- Running a quick test scan: `clamscan -r /home/[username]/Downloads`\n- Setting up scheduled scans via ClamTK\n- Configuring scan exclusions if needed\n- Understanding quarantine location\n\n## Post-Installation Information\n\nProvide the user with:\n- Location of ClamAV logs: `/var/log/clamav/`\n- How to update definitions manually: `sudo freshclam`\n- How to run a full system scan: `sudo clamscan -r /`\n- ClamTK location in application menu (typically under System or Utilities)\n- Recommendation to set up automatic scheduled scans via ClamTK GUI\n\n## Output Format\n\n```\nCLAMAV/CLAMTK INSTALLATION REPORT\n\n=== INSTALLATION STATUS ===\nClamAV: [Installed/Already Present]\nClamAV Daemon: [Running/Status]\nFreshClam: [Running/Status]\nClamTK: [Installed/Already Present]\n\n=== VIRUS DEFINITIONS ===\nLast Updated: [date/time]\nDatabase Version: [version]\nSignatures: [number]\n\n=== SERVICES STATUS ===\nclamav-daemon: [active/inactive]\nclamav-freshclam: [active/inactive]\n\n=== NEXT STEPS ===\n[Recommendations for first scan, scheduled scans, etc.]\n```\n\n## Important Notes\n\n- Use sudo for all installation and system configuration commands\n- Handle cases where packages are already installed gracefully\n- Ensure virus definitions are updated before declaring success\n- Verify services are running and enabled\n- If any step fails, provide clear error messages and troubleshooting steps\n- For Ubuntu/Debian systems, use apt package manager\n- Initial virus definition update may take several minutes - be patient\n", "filepath": "security/av/install-clamav.md" } ], "firewall": [ { "name": "analyze-firewall", "category": "firewall", "description": "", "tags": [], "content": "# Analyze Firewall and Suggest Hardening\n\nYou are helping the user check if a firewall is running, analyze open ports, and suggest potential hardening.\n\n## Your tasks:\n\n1. **Check if a firewall is active:**\n\n **UFW (Uncomplicated Firewall):**\n ```bash\n sudo ufw status verbose\n ```\n\n **iptables (lower level):**\n ```bash\n sudo iptables -L -n -v\n sudo ip6tables -L -n -v\n ```\n\n **firewalld (if used):**\n ```bash\n sudo firewall-cmd --state\n sudo firewall-cmd --list-all\n ```\n\n **nftables (modern replacement for iptables):**\n ```bash\n sudo nft list ruleset\n ```\n\n2. 
**If no firewall is active, recommend enabling UFW:**\n ```bash\n sudo apt install ufw\n sudo ufw enable\n sudo ufw status\n ```\n\n3. **Check currently listening services:**\n ```bash\n sudo ss -tulpn\n # Or\n sudo netstat -tulpn\n ```\n\n This shows what services are listening on which ports.\n\n4. **Check for open ports from external perspective:**\n ```bash\n sudo nmap -sT -O localhost\n ```\n\n Or install nmap if not available:\n ```bash\n sudo apt install nmap\n ```\n\n5. **Analyze each open port:**\n For each listening port, identify:\n - Which service is using it\n - Whether it should be accessible from network\n - Current firewall rules for it\n\n Common ports to check:\n - 22 (SSH)\n - 80 (HTTP)\n - 443 (HTTPS)\n - 3306 (MySQL)\n - 5432 (PostgreSQL)\n - 6379 (Redis)\n - 27017 (MongoDB)\n - 3389 (RDP)\n - 445 (SMB)\n - 2049 (NFS)\n\n6. **Check UFW rules in detail:**\n ```bash\n sudo ufw status numbered\n sudo ufw show added\n ```\n\n7. **Check iptables rules in detail:**\n ```bash\n sudo iptables -S\n sudo iptables -L INPUT -v -n\n sudo iptables -L OUTPUT -v -n\n sudo iptables -L FORWARD -v -n\n ```\n\n8. **Identify potential security issues:**\n\n **Services listening on 0.0.0.0 (all interfaces):**\n These are accessible from network. Should they be?\n ```bash\n sudo ss -tulpn | grep \"0.0.0.0\"\n ```\n\n **Services that should only be local:**\n Databases, Redis, etc. should typically only listen on 127.0.0.1:\n ```bash\n sudo ss -tulpn | grep -v \"127.0.0.1\"\n ```\n\n **Unnecessary services:**\n Check for services that shouldn't be running:\n ```bash\n sudo systemctl list-units --type=service --state=running | grep -E \"telnet|ftp|rsh\"\n ```\n\n9. **Analyze by service type:**\n\n **SSH (port 22):**\n - Should SSH be accessible from internet?\n - Consider changing default port\n - Check SSH configuration: `cat /etc/ssh/sshd_config | grep -v \"^#\" | grep -v \"^$\"`\n - Verify key-only authentication is enforced\n - Check fail2ban status: `sudo systemctl status fail2ban`\n\n **Web services (80, 443):**\n - Are these intentional?\n - Is there a web server running?\n - Check for default/test pages\n\n **Databases (3306, 5432, 27017, etc.):**\n - Should NEVER be exposed to internet\n - Should listen only on 127.0.0.1\n - Check configuration files\n\n10. **Check for common attack vectors:**\n ```bash\n # Check for services with known vulnerabilities\n sudo ss -tulpn | grep -E \"telnet|ftp|rlogin|rsh|rexec\"\n\n # Check for uncommon high ports\n sudo ss -tulpn | awk '{print $5}' | cut -d: -f2 | sort -n | uniq\n ```\n\n11. **Suggest hardening measures:**\n\n **Enable UFW if not active:**\n ```bash\n sudo ufw default deny incoming\n sudo ufw default allow outgoing\n sudo ufw enable\n ```\n\n **For SSH access:**\n ```bash\n sudo ufw allow 22/tcp comment 'SSH'\n # Or from specific IP:\n sudo ufw allow from to any port 22 comment 'SSH from specific IP'\n ```\n\n **For web server:**\n ```bash\n sudo ufw allow 80/tcp comment 'HTTP'\n sudo ufw allow 443/tcp comment 'HTTPS'\n ```\n\n **For local network only:**\n ```bash\n sudo ufw allow from 192.168.1.0/24 comment 'Local network'\n ```\n\n12. **Install and configure fail2ban (recommended):**\n ```bash\n sudo apt install fail2ban\n sudo systemctl enable fail2ban\n sudo systemctl start fail2ban\n sudo fail2ban-client status\n sudo fail2ban-client status sshd\n ```\n\n13. 
**Check for IPv6 exposure:**\n ```bash\n sudo ss -tulpn6\n sudo ufw status\n ```\n\n Ensure IPv6 is also protected:\n ```bash\n sudo ufw default deny incoming\n # UFW handles both IPv4 and IPv6\n ```\n\n14. **Advanced iptables hardening (if using iptables):**\n\n **Drop invalid packets:**\n ```bash\n sudo iptables -A INPUT -m conntrack --ctstate INVALID -j DROP\n ```\n\n **Rate limit SSH:**\n ```bash\n sudo iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set\n sudo iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 -j DROP\n ```\n\n **Log dropped packets:**\n ```bash\n sudo iptables -A INPUT -j LOG --log-prefix \"iptables-dropped: \"\n ```\n\n15. **Check for Docker interference:**\n Docker manipulates iptables directly, which can bypass UFW:\n ```bash\n sudo iptables -L DOCKER -n\n ```\n\n To prevent Docker from bypassing UFW, edit `/etc/docker/daemon.json`:\n ```json\n {\n \"iptables\": false\n }\n ```\n\n Or use firewalld instead for better Docker integration.\n\n16. **Check connection tracking:**\n ```bash\n sudo conntrack -L\n cat /proc/sys/net/netfilter/nf_conntrack_count\n cat /proc/sys/net/netfilter/nf_conntrack_max\n ```\n\n17. **Review logging:**\n ```bash\n sudo grep UFW /var/log/syslog | tail -20\n sudo tail -20 /var/log/ufw.log\n ```\n\n18. **Generate hardening recommendations:**\n Based on findings, suggest:\n - Enable firewall if not active\n - Block unnecessary ports\n - Restrict services to local interface only\n - Install fail2ban for brute-force protection\n - Change SSH port (optional, security through obscurity)\n - Disable root SSH login\n - Use key-based SSH authentication only\n - Close database ports from external access\n - Remove unnecessary services\n - Enable connection rate limiting\n - Set up intrusion detection (OSSEC, Snort)\n - Regular security updates\n - Monitor logs regularly\n\n19. **Provide firewall management commands:**\n\n **UFW:**\n - `sudo ufw status` - Check status\n - `sudo ufw enable` - Enable firewall\n - `sudo ufw disable` - Disable firewall\n - `sudo ufw allow ` - Allow port\n - `sudo ufw deny ` - Deny port\n - `sudo ufw delete ` - Delete rule\n - `sudo ufw reset` - Reset to default\n - `sudo ufw logging on` - Enable logging\n\n **iptables:**\n - `sudo iptables -L` - List rules\n - `sudo iptables -A INPUT -p tcp --dport -j ACCEPT` - Allow port\n - `sudo iptables -D INPUT ` - Delete rule\n - `sudo iptables-save > /etc/iptables/rules.v4` - Save rules\n - `sudo iptables-restore < /etc/iptables/rules.v4` - Restore rules\n\n20. **Report findings:**\n Summarize:\n - Firewall status (active/inactive)\n - List of open ports\n - Services listening on each port\n - Current firewall rules\n - Security issues found\n - Recommended hardening measures\n - Priority actions (critical vs. 
nice-to-have)\n\n## Important notes:\n- Test firewall rules carefully to avoid locking yourself out\n- Always have a backup access method (console/KVM) before changing SSH rules\n- UFW and iptables can conflict - use one or the other\n- Docker can bypass UFW - special configuration needed\n- Deny incoming by default, allow specific services\n- Keep logs for intrusion detection\n- Regularly review and update firewall rules\n- Consider using VPN for remote access instead of exposing services\n- fail2ban is essential for SSH protection\n- Don't expose databases to the internet\n", "filepath": "security/firewall/analyze-firewall.md" } ], "posture-diagnostics": [ { "name": "security-posture-check", "category": "posture-diagnostics", "description": "", "tags": [], "content": "You are conducting a comprehensive security posture evaluation for this Linux desktop system.\n\n## Your Task\n\nPerform a thorough security assessment of the system and provide a detailed report with actionable recommendations.\n\n## Assessment Areas\n\n### 1. Firewall Status\n- Check if UFW (Uncomplicated Firewall) or iptables is active\n- Review firewall rules and policies\n- Identify any concerning open ports\n\n### 2. System Updates\n- Check for available security updates\n- Verify automatic update configuration\n- Review update history for critical patches\n\n### 3. User Account Security\n- List user accounts and their privileges\n- Check for accounts with sudo access\n- Identify any accounts without passwords or weak configurations\n- Review SSH key configurations\n\n### 4. SSH Security\n- Check if SSH is running\n- Review SSH configuration (`/etc/ssh/sshd_config`)\n- Verify key-based authentication settings\n- Check for root login permission\n- Review allowed authentication methods\n\n### 5. Running Services\n- List all active services\n- Identify unnecessary services that could be disabled\n- Check for services listening on external interfaces\n\n### 6. File Permissions\n- Check critical system files (`/etc/passwd`, `/etc/shadow`, `/etc/sudoers`)\n- Review permissions on home directories\n- Identify world-writable files in system directories\n\n### 7. Antivirus/Malware Protection\n- Check if ClamAV or other antivirus is installed\n- Verify if definitions are up to date\n- Check recent scan history\n\n### 8. Security Packages\n- Verify installation of: fail2ban, apparmor, aide, rkhunter, lynis\n- Check their configuration and status\n\n### 9. Network Security\n- Review listening ports and services\n- Check for unusual network connections\n- Verify network configuration security\n\n### 10. 
Audit Logs\n- Check if auditd is running\n- Review recent authentication logs\n- Look for failed login attempts\n- Check for suspicious sudo usage\n\n## Output Format\n\nProvide your findings in the following structured format:\n\n```\nSECURITY POSTURE ASSESSMENT\nGenerated: [timestamp]\n\n=== SUMMARY ===\nOverall Security Level: [Critical/Poor/Fair/Good/Excellent]\nCritical Issues Found: [number]\nWarnings: [number]\nRecommendations: [number]\n\n=== CRITICAL ISSUES ===\n[List any critical security problems that need immediate attention]\n\n=== WARNINGS ===\n[List security concerns that should be addressed]\n\n=== CURRENT PROTECTIONS ===\n[List active security measures in place]\n\n=== RECOMMENDATIONS ===\n[Prioritized list of security improvements]\n\n=== DETAILED FINDINGS ===\n[Detailed breakdown by assessment area]\n```\n\n## Important Notes\n\n- Use sudo when necessary to access system files and configurations\n- Be thorough but focus on actionable findings\n- Prioritize issues by severity\n- Provide specific commands for remediation where applicable\n- Consider the desktop/workstation context (not a server)\n", "filepath": "security/posture-diagnostics/security-posture-check.md" } ], "health-checks": [ { "name": "btrfs-snapper-health", "category": "health-checks", "description": "", "tags": [], "content": "# BTRFS and Snapper Snapshot Health Check\n\nYou are helping the user check their BTRFS filesystem configuration and Snapper snapshot setup.\n\n## Your tasks:\n\n1. **Check if BTRFS is in use:**\n - Run `df -T` to identify BTRFS filesystems\n - Run `sudo btrfs filesystem show` to display all BTRFS filesystems\n - Run `mount | grep btrfs` to see mounted BTRFS filesystems with their options\n\n2. **Check BTRFS filesystem health:**\n - For each BTRFS filesystem found, run `sudo btrfs filesystem usage `\n - Run `sudo btrfs device stats ` to check for device errors\n - Run `sudo btrfs scrub status ` to check scrub status\n\n3. **Check Snapper configuration:**\n - Check if Snapper is installed: `which snapper`\n - If not installed, ask the user if they want to install it\n - List Snapper configurations: `sudo snapper list-configs`\n - For each configuration, show snapshots: `sudo snapper -c list`\n - Show Snapper configuration details: `sudo snapper -c get-config`\n\n4. **Analyze snapshot usage:**\n - Check disk space used by snapshots\n - Identify if there are too many snapshots that should be cleaned up\n - Check automatic snapshot policies\n\n5. **Report findings:**\n - Summarize BTRFS health status\n - Report on snapshot configurations and disk usage\n - Provide recommendations for:\n - Snapshot retention policies if too many snapshots exist\n - Running scrub if it hasn't been run recently\n - Fixing any errors or issues detected\n - Setting up Snapper if BTRFS is in use but Snapper is not configured\n\n## Important notes:\n- Use sudo for all BTRFS and Snapper commands\n- Be clear about what you find and what actions you recommend\n- If BTRFS is not in use, inform the user and exit gracefully\n", "filepath": "storage/health-checks/btrfs-snapper-health.md" }, { "name": "check-drive-health", "category": "health-checks", "description": "", "tags": [], "content": "# Hard Drive Health Check\n\nYou are helping the user run comprehensive health checks on all storage drives (SSD, HDD, NVMe, or mixed configurations).\n\n## Your tasks:\n\n1. 
**Identify all storage devices:**\n - List all block devices: `lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,MODEL,TRAN`\n - Get detailed disk information: `sudo lshw -class disk -short`\n - Identify device types (NVMe, SATA SSD, SATA HDD, etc.)\n\n2. **Check SMART status for each device:**\n\n **For SATA/SAS drives (SSD and HDD):**\n - Check if smartmontools is installed: `which smartctl`\n - If not installed, ask user if they want to install it: `sudo apt install smartmontools`\n - For each drive, run:\n - `sudo smartctl -i /dev/sdX` (device info)\n - `sudo smartctl -H /dev/sdX` (health status)\n - `sudo smartctl -A /dev/sdX` (attributes)\n - `sudo smartctl -l error /dev/sdX` (error log)\n\n **For NVMe drives:**\n - Check NVMe tools: `which nvme`\n - For each NVMe drive, run:\n - `sudo nvme list`\n - `sudo nvme smart-log /dev/nvmeXn1`\n - `sudo smartctl -a /dev/nvmeXn1` (if smartctl supports NVMe)\n\n3. **Analyze drive health indicators:**\n\n **For SSDs:**\n - Wear leveling count\n - Media wearout indicator\n - Available spare capacity\n - Percentage used\n - Total bytes written\n - Power-on hours\n - Reallocated sectors\n\n **For HDDs:**\n - Reallocated sector count\n - Current pending sectors\n - Offline uncorrectable sectors\n - Spin retry count\n - Power-on hours\n - Temperature\n - UDMA CRC errors\n\n **For NVMe:**\n - Critical warning\n - Temperature\n - Available spare\n - Percentage used\n - Data units read/written\n - Power cycles\n - Unsafe shutdowns\n\n4. **Check filesystem health:**\n - Review dmesg for disk errors: `sudo dmesg | grep -i \"error\\|fail\" | grep -i \"sd\\|nvme\"`\n - Check system logs: `sudo journalctl -p err -g \"sd\\|nvme\" --since \"7 days ago\"`\n\n5. **Report findings:**\n - Summarize each drive's health status\n - Highlight any concerning indicators:\n - High reallocated sectors\n - High wear level on SSDs\n - Temperature issues\n - Errors in logs\n - Pending sectors\n - Calculate estimated remaining lifespan for SSDs based on wear indicators\n - Provide recommendations:\n - Drives that should be replaced soon\n - Drives that need monitoring\n - Whether to enable SMART monitoring if not active\n - Backup recommendations if drives show signs of failure\n\n## Important notes:\n- Use sudo for all SMART commands\n- Be clear about the severity of any issues found\n- Distinguish between informational metrics and critical warnings\n- If smartmontools is not installed, offer to install it\n- Some drives may not support all SMART features - this is normal\n", "filepath": "storage/health-checks/check-drive-health.md" } ], "network-mounts": [ { "name": "setup-nfs-mounts", "category": "network-mounts", "description": "", "tags": [], "content": "# NFS Mount Setup Assistant\n\nYou are helping the user set up NFS (Network File System) mounts to remote systems.\n\n## Your tasks:\n\n1. **Check NFS client prerequisites:**\n - Check if NFS client utilities are installed: `dpkg -l | grep nfs-common`\n - If not installed:\n ```bash\n sudo apt update\n sudo apt install nfs-common\n ```\n\n2. **Gather mount information from the user:**\n Ask the user for:\n - Remote NFS server IP or hostname (e.g., `10.0.0.100`)\n - Remote export path (e.g., `/srv/nfs/share`)\n - Local mount point (e.g., `/mnt/nfs/remote-share`)\n - Mount options preferences (default is usually fine, but ask if they need specific options)\n\n3. 
**Test NFS server accessibility:**\n - Check if remote server is reachable: `ping -c 3 `\n - List available NFS exports from the remote server:\n ```bash\n showmount -e \n ```\n - If this fails, troubleshoot:\n - Check if NFS ports are open (2049, 111)\n - Verify firewall settings\n\n4. **Create local mount point:**\n ```bash\n sudo mkdir -p \n ```\n\n5. **Test mount temporarily:**\n Before making it permanent, test the mount:\n ```bash\n sudo mount -t nfs : \n ```\n\n Verify the mount:\n ```bash\n df -h | grep \n ls -la \n ```\n\n6. **Configure mount options:**\n Discuss common NFS mount options with the user:\n - `rw` / `ro` - Read-write or read-only\n - `hard` / `soft` - Hard mount (recommended) or soft mount\n - `intr` - Allow interruption of NFS requests\n - `noatime` - Don't update access times (performance)\n - `vers=4` - Force NFSv4 (recommended)\n - `timeo=14` - Timeout value\n - `retrans=3` - Number of retransmits\n - `_netdev` - Required for network filesystems\n - `nofail` - Don't fail boot if mount unavailable\n\n Recommended default options:\n ```\n rw,hard,intr,vers=4,_netdev,nofail\n ```\n\n7. **Make mount permanent via /etc/fstab:**\n - Backup current fstab:\n ```bash\n sudo cp /etc/fstab /etc/fstab.backup.$(date +%Y%m%d_%H%M%S)\n ```\n\n - Add entry to /etc/fstab:\n ```\n : nfs 0 0\n ```\n\n - Test fstab entry without rebooting:\n ```bash\n sudo umount \n sudo mount -a\n df -h | grep \n ```\n\n8. **Set up automount with systemd (alternative to fstab):**\n If the user prefers automount, create systemd mount units:\n\n Create `/etc/systemd/system/mnt-nfs-remote\\x2dshare.mount`:\n ```\n [Unit]\n Description=NFS Mount for remote-share\n After=network-online.target\n Wants=network-online.target\n\n [Mount]\n What=:\n Where=\n Type=nfs\n Options=\n\n [Install]\n WantedBy=multi-user.target\n ```\n\n Enable and start:\n ```bash\n sudo systemctl daemon-reload\n sudo systemctl enable mnt-nfs-remote\\\\x2dshare.mount\n sudo systemctl start mnt-nfs-remote\\\\x2dshare.mount\n sudo systemctl status mnt-nfs-remote\\\\x2dshare.mount\n ```\n\n9. **Configure permissions:**\n Check and configure local mount point permissions:\n ```bash\n ls -la \n ```\n\n If needed, adjust ownership:\n ```bash\n sudo chown : \n ```\n\n10. **Test and verify:**\n - Create a test file:\n ```bash\n touch /test-file\n ls -la /test-file\n ```\n - Check from remote server if possible\n - Verify mount survives reboot (ask user to test)\n\n11. **Troubleshooting guidance:**\n If issues occur, check:\n - Network connectivity: `ping `\n - NFS service on remote: `showmount -e `\n - Firewall rules on both client and server\n - SELinux/AppArmor policies (if applicable)\n - NFS server exports configuration (`/etc/exports` on server)\n - Mount logs: `sudo journalctl -u ` or `dmesg | grep nfs`\n\n12. 
**Provide best practices:**\n - Use NFSv4 when possible (better performance and security)\n - Use `_netdev` option for network mounts\n - Use `nofail` to prevent boot issues if NFS server is down\n - Consider using autofs for on-demand mounting\n - Document all NFS mounts (keep a list of what's mounted where)\n - Regular monitoring of NFS mount health\n\n## Important notes:\n- Always backup /etc/fstab before editing\n- Test mounts before making them permanent\n- Use `_netdev` and `nofail` options to prevent boot issues\n- Systemd mount units need escaped names (replace / with \\x2d)\n- Ensure NFS server has proper export permissions configured\n", "filepath": "storage/network-mounts/setup-nfs-mounts.md" }, { "name": "setup-smb-mounts", "category": "network-mounts", "description": "", "tags": [], "content": "# SMB/CIFS Mount Setup Assistant\n\nYou are helping the user set up SMB/CIFS (Windows/Samba) mounts to remote systems.\n\n## Your tasks:\n\n1. **Check SMB client prerequisites:**\n - Check if CIFS utilities are installed: `dpkg -l | grep cifs-utils`\n - If not installed:\n ```bash\n sudo apt update\n sudo apt install cifs-utils\n ```\n\n2. **Gather mount information from the user:**\n Ask the user for:\n - Remote SMB server IP or hostname (e.g., `10.0.0.100` or `nas.local`)\n - Share name (e.g., `shared` or `documents`)\n - Username for authentication\n - Domain (if applicable, otherwise use `WORKGROUP`)\n - Local mount point (e.g., `/mnt/smb/remote-share`)\n - Whether they want to store credentials securely\n\n3. **Test SMB server accessibility:**\n - Check if remote server is reachable: `ping -c 3 `\n - List available shares (if credentials are available):\n ```bash\n smbclient -L // -U \n ```\n - If this fails, troubleshoot:\n - Check if SMB ports are open (445, 139)\n - Verify firewall settings\n\n4. **Set up credentials file (recommended for security):**\n Create a credentials file to avoid storing passwords in /etc/fstab:\n\n ```bash\n sudo mkdir -p /etc/samba/credentials\n sudo touch /etc/samba/credentials/\n sudo chmod 700 /etc/samba/credentials\n sudo chmod 600 /etc/samba/credentials/\n ```\n\n Edit the credentials file:\n ```\n username=\n password=\n domain=\n ```\n\n Secure it:\n ```bash\n sudo chown root:root /etc/samba/credentials/\n sudo chmod 600 /etc/samba/credentials/\n ```\n\n5. **Create local mount point:**\n ```bash\n sudo mkdir -p \n ```\n\n6. **Test mount temporarily:**\n Before making it permanent, test the mount:\n ```bash\n sudo mount -t cifs /// \\\n -o credentials=/etc/samba/credentials/,uid=$(id -u),gid=$(id -g)\n ```\n\n Verify the mount:\n ```bash\n df -h | grep \n ls -la \n ```\n\n7. **Configure mount options:**\n Discuss common CIFS mount options with the user:\n - `credentials=` - Use credentials file\n - `uid=` - Set file owner (use `id -u`)\n - `gid=` - Set file group (use `id -g`)\n - `file_mode=0644` - File permissions\n - `dir_mode=0755` - Directory permissions\n - `vers=3.0` - SMB protocol version (2.0, 2.1, 3.0, 3.1.1)\n - `iocharset=utf8` - Character set\n - `_netdev` - Required for network filesystems\n - `nofail` - Don't fail boot if mount unavailable\n - `noauto` - Don't mount automatically (use with autofs)\n - `rw` / `ro` - Read-write or read-only\n\n Recommended default options:\n ```\n credentials=/etc/samba/credentials/,uid=,gid=,file_mode=0644,dir_mode=0755,vers=3.0,iocharset=utf8,_netdev,nofail\n ```\n\n8. 
**Detect SMB version:**\n Help determine the best SMB version to use:\n ```bash\n smbclient -L // -U --option='client max protocol=SMB3'\n ```\n\n Common versions:\n - SMB 1.0 - Legacy, insecure (avoid)\n - SMB 2.0 - Windows Vista/Server 2008\n - SMB 2.1 - Windows 7/Server 2008 R2\n - SMB 3.0 - Windows 8/Server 2012\n - SMB 3.1.1 - Windows 10/Server 2016+ (recommended)\n\n9. **Make mount permanent via /etc/fstab:**\n - Backup current fstab:\n ```bash\n sudo cp /etc/fstab /etc/fstab.backup.$(date +%Y%m%d_%H%M%S)\n ```\n\n - Add entry to /etc/fstab:\n ```\n /// cifs 0 0\n ```\n\n - Test fstab entry without rebooting:\n ```bash\n sudo umount \n sudo mount -a\n df -h | grep \n ```\n\n10. **Set up automount with systemd (alternative to fstab):**\n If the user prefers automount, create systemd mount units:\n\n Create `/etc/systemd/system/mnt-smb-remote\\x2dshare.mount`:\n ```\n [Unit]\n Description=SMB Mount for remote-share\n After=network-online.target\n Wants=network-online.target\n\n [Mount]\n What=///\n Where=\n Type=cifs\n Options=\n\n [Install]\n WantedBy=multi-user.target\n ```\n\n Enable and start:\n ```bash\n sudo systemctl daemon-reload\n sudo systemctl enable mnt-smb-remote\\\\x2dshare.mount\n sudo systemctl start mnt-smb-remote\\\\x2dshare.mount\n sudo systemctl status mnt-smb-remote\\\\x2dshare.mount\n ```\n\n11. **Configure for Windows Active Directory (if applicable):**\n If connecting to AD domain:\n - May need to install additional packages:\n ```bash\n sudo apt install krb5-user\n ```\n - Use domain credentials in credentials file\n - May need to configure Kerberos (`/etc/krb5.conf`)\n - Use `sec=krb5` option if Kerberos is configured\n\n12. **Test and verify:**\n - Create a test file:\n ```bash\n touch /test-file\n ls -la /test-file\n ```\n - Check permissions and ownership\n - Verify mount survives reboot (ask user to test)\n\n13. **Troubleshooting guidance:**\n If issues occur, check:\n - Network connectivity: `ping `\n - SMB service on remote: `smbclient -L // -N` (null session)\n - Firewall rules on both client and server\n - SMB version compatibility: try different `vers=` options\n - Credentials: test with `smbclient /// -U `\n - Mount logs: `sudo journalctl -u ` or `dmesg | grep cifs`\n - Permissions issues: check `uid`, `gid`, `file_mode`, `dir_mode`\n - Check kernel logs: `dmesg | tail -20`\n\n14. 
**Provide best practices:**\n - Store credentials in `/etc/samba/credentials/` with 600 permissions\n - Use SMB 3.0+ when possible (better security and performance)\n - Use `_netdev` and `nofail` options to prevent boot issues\n - Set appropriate `uid` and `gid` for file access\n - Avoid SMB 1.0 (deprecated and insecure)\n - Consider using autofs for on-demand mounting\n - Document all SMB mounts\n - Regular monitoring of SMB mount health\n - Keep credentials files secure (root ownership, 600 permissions)\n\n## Important notes:\n- Always backup /etc/fstab before editing\n- Never store passwords directly in /etc/fstab\n- Use credentials files with proper permissions (600, root:root)\n- Test mounts before making them permanent\n- Use `_netdev` and `nofail` options to prevent boot issues\n- Systemd mount units need escaped names (replace / with \\x2d)\n- SMB 1.0 is deprecated and should be avoided\n", "filepath": "storage/network-mounts/setup-smb-mounts.md" } ], "raid": [ { "name": "check-raid-config", "category": "raid", "description": "", "tags": [], "content": "# RAID Configuration Check\n\nYou are helping the user identify and analyze their RAID configuration (software or hardware).\n\n## Your tasks:\n\n1. **Detect RAID type:**\n\n **Software RAID (mdadm):**\n - Check if mdadm is installed: `which mdadm`\n - List all MD devices: `cat /proc/mdstat`\n - Get detailed info for each array: `sudo mdadm --detail /dev/md*`\n - Check mdadm configuration: `cat /etc/mdadm/mdadm.conf` (if exists)\n\n **LVM RAID:**\n - Check for LVM: `sudo pvs`, `sudo vgs`, `sudo lvs`\n - Check for RAID logical volumes: `sudo lvs -a -o +devices,segtype`\n\n **Hardware RAID:**\n - Check for common hardware RAID controllers:\n - MegaRAID: `sudo lspci | grep -i raid` and `which megacli` or `which storcli`\n - HP Smart Array: `which hpacucli` or `which ssacli`\n - Adaptec: `which arcconf`\n - List block devices: `lsblk` and `sudo lshw -class disk -class storage`\n\n **ZFS (if applicable):**\n - Check if ZFS is installed: `which zfs`\n - List ZFS pools: `sudo zpool status`\n - List ZFS datasets: `sudo zfs list`\n\n2. **Analyze RAID health:**\n - For software RAID: check array status, degraded arrays, sync status\n - For hardware RAID: if tools are available, check controller and disk status\n - Check for any failed or missing drives\n - Review disk errors: `sudo smartctl -a /dev/sd*` for member disks\n\n3. **Report configuration details:**\n - RAID level (RAID 0, 1, 5, 6, 10, etc.)\n - Number of devices in each array\n - Total capacity and usable capacity\n - Current status (clean, active, degraded, rebuilding, etc.)\n - Performance configuration (chunk size, stripe size)\n\n4. 
**Provide recommendations:**\n - If arrays are degraded, suggest immediate action\n - If no monitoring is configured, suggest setting up monitoring\n - If hardware RAID tools are missing, suggest installation\n - Best practices for the detected configuration\n\n## Important notes:\n- Use sudo for all RAID-related commands\n- If no RAID is detected, clearly state \"No RAID configuration found\"\n- Be specific about what type of RAID is in use\n- Highlight any critical issues requiring immediate attention\n", "filepath": "storage/raid/check-raid-config.md" } ] }; // Index page functionality function filterCategories() { const searchTerm = document.getElementById('searchInput').value.toLowerCase(); const cards = document.querySelectorAll('.category-card'); cards.forEach(card => { const categoryName = card.querySelector('h3').textContent.toLowerCase(); if (categoryName.includes(searchTerm)) { card.classList.remove('hidden'); } else { card.classList.add('hidden'); } }); } // Category page functionality function loadCategoryPage() { const urlParams = new URLSearchParams(window.location.search); const category = urlParams.get('cat'); if (!category || !commandsData[category]) { window.location.href = 'index.html'; return; } const categoryTitle = document.getElementById('categoryTitle'); const commandsList = document.getElementById('commandsList'); categoryTitle.textContent = category.replace(/-/g, ' ').replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase()) + ' Commands'; const commands = commandsData[category]; commandsList.innerHTML = commands.map((cmd, index) => `
        <div class="command-card">
            <div class="command-header" onclick="toggleCommand(${index})">
                <span class="command-name">/${cmd.name}</span>
                <span class="command-description">${cmd.description}</span>
                ${cmd.tags && cmd.tags.length ? `
                <span class="command-tags">
                    ${cmd.tags.map(tag => `<span class="tag">${tag}</span>`).join('')}
                </span>
                ` : ''}
                <button class="copy-btn" onclick="copyCommand(event, '/${cmd.name}', this)">Copy</button>
            </div>
            <div class="command-content" id="content-${index}">
                ${formatMarkdown(cmd.content)}
            </div>
        </div>
    `).join('');
    // NOTE: the card markup above is a reconstruction -- the original tags were stripped
    // from this file. Class names and onclick wiring follow what toggleCommand(),
    // copyCommand() and filterCommands() expect; the tag list sits inside the header so
    // that content.previousElementSibling in toggleCommand() still resolves to the header.
}

function toggleCommand(index) {
    const content = document.getElementById(`content-${index}`);
    const header = content.previousElementSibling;
    content.classList.toggle('expanded');
    header.classList.toggle('expanded');
}

function copyCommand(event, commandName, button) {
    event.stopPropagation();
    navigator.clipboard.writeText(commandName).then(() => {
        const originalText = button.textContent;
        button.textContent = 'Copied!';
        button.classList.add('copied');
        setTimeout(() => {
            button.textContent = originalText;
            button.classList.remove('copied');
        }, 2000);
    });
}

function filterCommands() {
    const searchTerm = document.getElementById('searchInput').value.toLowerCase();
    const cards = document.querySelectorAll('.command-card');
    cards.forEach(card => {
        const commandName = card.querySelector('.command-name').textContent.toLowerCase();
        const commandDesc = card.querySelector('.command-description').textContent.toLowerCase();
        if (commandName.includes(searchTerm) || commandDesc.includes(searchTerm)) {
            card.classList.remove('hidden');
        } else {
            card.classList.add('hidden');
        }
    });
}
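
// Optional sketch (not part of the original page logic): copyCommand() above relies on
// navigator.clipboard, which is only exposed in secure contexts (HTTPS or localhost).
// The helper below shows one way to fall back to the deprecated-but-widely-supported
// execCommand('copy') path when the Clipboard API is unavailable. The name
// copyTextWithFallback and the off-screen-textarea approach are illustrative choices.
function copyTextWithFallback(text) {
    if (navigator.clipboard && navigator.clipboard.writeText) {
        return navigator.clipboard.writeText(text);
    }
    // Fallback: copy via a temporary, invisible textarea
    const textarea = document.createElement('textarea');
    textarea.value = text;
    textarea.style.position = 'fixed';
    textarea.style.opacity = '0';
    document.body.appendChild(textarea);
    textarea.select();
    const ok = document.execCommand('copy');
    document.body.removeChild(textarea);
    return ok ? Promise.resolve() : Promise.reject(new Error('copy failed'));
}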
function formatMarkdown(text) {
    // Simple markdown formatting.
    // (The HTML in the replacement strings below is reconstructed; the original tags
    //  were stripped from this file, so treat the exact markup as an approximation.)
    let html = text;

    // Code blocks
    html = html.replace(/```([\s\S]*?)```/g, '<pre><code>$1</code></pre>');

    // Inline code
    html = html.replace(/`([^`]+)`/g, '<code>$1</code>');

    // Bold
    html = html.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');

    // Headers
    html = html.replace(/^### (.+)$/gm, '<h3>$1</h3>');
    html = html.replace(/^## (.+)$/gm, '<h2>$1</h2>');

    // Bullet lists: wrap items in <li>, then wrap the run of items in a single <ul>
    html = html.replace(/^\s*[-*] (.+)$/gm, '<li>$1</li>');
    html = html.replace(/(<li>.*<\/li>)/s, '<ul>$1</ul>');

    // Paragraphs (skip lines that already start with a block-level tag or list item)
    html = html.replace(/^(?!<[hup]|<li)(.+)$/gm, '<p>$1</p>');

    return html;
}

// Initialize the appropriate page
if (window.location.pathname.includes('category.html')) {
    document.addEventListener('DOMContentLoaded', loadCategoryPage);
}
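
// Illustrative self-check for formatMarkdown (safe to delete): console.assert only logs
// if a rule regresses. The expected fragments assume the <code>/<strong> replacements
// reconstructed above.
console.assert(formatMarkdown('`conda env list`').includes('<code>conda env list</code>'),
    'inline code should be wrapped in <code> tags');
console.assert(formatMarkdown('**important**').includes('<strong>important</strong>'),
    'bold text should be wrapped in <strong> tags');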