Recent Content
Terraform Modules with Cato: Simplifying and Scaling Network Deployments
In this video, we introduce Terraform Modules with Cato and show how they simplify, standardize, and scale Cato deployments. You'll learn how Terraform modules help you:

- Combine multiple Cato resources into reusable building blocks
- Standardize corporate firewall rules and remote user configurations
- Reduce Terraform code by packaging common Cato use cases into modules

This session is ideal for engineers looking to manage Cato environments more efficiently using Infrastructure as Code (IaC), whether you're just getting started with Terraform or looking to scale existing deployments.

References: Cato Terraform Registry
Cato SDK 101: Introduction & Building Your First Queries
Welcome to your first look at the Cato SDK 🚀 In this video, we introduce the SDK, walk through setup, and guide you through building your first real queries. Perfect for developers, SEs, analysts, or anyone starting with the platform.

What you'll learn:
- What the Cato SDK is and how it works
- How to install and authenticate the SDK
- The structure of clients, queries, and models
- How to build and run your first queries ⚡
- Common mistakes to avoid
- Where to find docs and next steps 📚
Mastering Cato Go SDK Queries: A Practical Guide for Developers
Learn how to leverage the Cato Go SDK to query data programmatically and build powerful automations around the Cato SASE platform. This session walks through:

- Initializing the SDK
- Authenticating securely
- Performing real-world queries
- Interpreting responses

Whether you're building internal tools, integrations, or custom workflows, this video gives you the foundation you need to work confidently with the Go SDK.
Dynamic Resources with Cato & Terraform: Automate and Scale Your Infrastructure
Take your Cato Networks automation to the next level with Terraform! In this video, we dive deep into how to dynamically manage and update your Cato resources using Terraform, enabling faster, scalable, and automated infrastructure management.

What you'll learn:
- How to update Cato resources dynamically: Sites, Hosts, Domains, and Groups
- Using bulk provisioning to quickly build and modify multiple Cato objects
- How Terraform and Cato work together to create a responsive, adaptive security environment
- Practical examples of Terraform configuration, authentication, and resource updates

By the end, you'll know how to integrate Terraform into your Cato environment to automate routine changes and respond dynamically to your organization's evolving needs.

Perfect for: Network engineers, DevOps professionals, and IT admins seeking to simplify large-scale configuration management with Infrastructure as Code (IaC).

Resources Mentioned:
- Cato Terraform Provider Docs
- Terraform Docs
Intro to Terraform & Cato: Setup, Authentication, and Your First Terraform Apply
Ready to automate your Cato Networks setup with Terraform? In this video, we'll walk you through everything you need to start using Terraform with Cato, from initial setup to your very first infrastructure deployment.

What you'll learn:
- How to install and configure Terraform
- Setting up authentication between Terraform and Cato Networks
- Understanding the Cato Terraform Provider
- Running your first terraform apply to deploy real configurations

By the end of this session, you'll have a working Terraform environment integrated with Cato, ready to manage network resources as code!

Perfect for: Network engineers, DevOps professionals, and Cato administrators looking to bring Infrastructure as Code (IaC) into their workflow.

Resources Mentioned:
- Terraform Docs
- Cato Terraform Provider Docs
Greenfield/Brownfield Deployments for Cato Network Sites
Have you ever found yourself managing dozens or even hundreds of Cato Network sites manually through the Cato Management Application (CMA), wishing there was a better way to maintain consistency, version control, and automate changes? Cato Brownfield Deployments (or Day 2 Operations) solve exactly this problem by enabling you to bring your existing Cato infrastructure under Terraform management without recreating everything from scratch.

This comprehensive guide walks you through exporting existing Cato Network site configurations, modifying them as needed, and importing them into Terraform state for infrastructure-as-code (IaC) management.

Why This Matters

- Version Control: Track all infrastructure changes in Git
- Consistency: Ensure standardized configurations across all sites
- Automation: Enable CI/CD pipelines for network infrastructure
- Disaster Recovery: Quick restoration from configuration backups
- Bulk Updates: Modify multiple sites simultaneously with confidence

What is a Cato Brownfield Deployment?

In infrastructure terminology:

- Greenfield Deployment: Building infrastructure from scratch with no existing resources
- Brownfield Deployment: Managing and updating existing infrastructure that's already running in production; in this case, sites that are already configured in the Cato Management Application (CMA)

NOTE: Bulk export and import of sites for brownfield deployments applies to physical socket site deployments (X1500, X1600, X1600_LTE, X1700); virtual socket sites for cloud deployments include separate cloud resources that are covered by the Terraform modules found here.

For Cato Networks, a brownfield deployment means:

- You already have socket sites, network interfaces, and network ranges configured in the CMA
- You want to manage, or take over the configuration of, these existing resources using Terraform
- You don't want to delete and recreate everything (which would cause network downtime)
- You need to import existing configurations into Terraform state

The socket-bulk-sites Terraform module, combined with the Cato CLI (catocli), makes this process straightforward and safe.

Prerequisites

Before starting, ensure you have the following installed on your machine:

- Install Terraform
- Install Python
- Install Cato CLI
- Install Git (optional)

NOTE: It is a best practice to use a version control system to track changes in code and configuration files; this example highlights how to use the Git CLI client and GitHub to do so.

Validate Required Tools

# Python 3.6 or later
python3 --version

# Terraform 0.13 or later
terraform --version

# Cato CLI tool
pip3 install catocli

# Git (recommended for version control)
git --version

Pro Tip

Add the following to your ~/.bashrc or ~/.zshrc file to create aliases that make running the various Terraform commands easier:

cat >> ~/.bashrc << 'EOF'
alias tf='terraform'
alias tfap='terraform apply --auto-approve'
alias tfapp='terraform apply --auto-approve -parallelism=1'
alias tfdap='terraform destroy --auto-approve'
alias tfdapp='terraform destroy --auto-approve -parallelism=1'
alias tfclear='rm -rf .terraform* && rm terraform.tfstate*'
alias tffmt="tf fmt -recursive"
EOF
source ~/.bashrc

Cato API Credentials

You'll need:

- API Token: Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- Account ID: Your Cato account number, found in Account > Account Info or in the CMA URL, for example: https://system.cc.catonetworks.com/#/account/{account_id}/

Cato Brownfield Deployment Overview

The Cato brownfield deployment workflow consists of four main phases:

- Phase 1: Export - Cato Management Application → catocli → CSV/JSON files
- Phase 2: Import - CSV/JSON files → Terraform State (catocli import command)
- Phase 3: Modify - Edit CSV/JSON files with desired changes (optional)
- Phase 4: Manage - Terraform State → Terraform Apply → Update CMA

Components

- Cato CLI (catocli): Command-line tool for exporting and importing configurations
- socket-bulk-sites Module: Terraform module that processes CSV/JSON files
- Terraform State: Tracks which resources are managed by Terraform
- Cato Management Application: The source of truth for your actual network configuration

Step-by-Step Implementation

Step 1: Configure Cato CLI

First, configure the CLI with your API credentials:

# Interactive configuration (recommended for first-time setup)
catocli configure

# Or configure with environment variables
export CATO_TOKEN="your-api-token-here"
export CATO_ACCOUNT_ID="your-account-id"

Verify Your Configuration:

# View current configuration
catocli configure show

# List your sites to confirm access
catocli entity site list

Step 2: Create Your Project Directory

Organize your Terraform project with a clear structure:

# Create project directory
mkdir cato-brownfield-deployment
cd cato-brownfield-deployment

# Initialize git repository (optional)
git init

Step 3: Set Up Terraform Configuration

Create your main Terraform configuration file (main.tf):

terraform {
  required_version = ">= 0.13"
  required_providers {
    cato = {
      source  = "catonetworks/cato"
      version = "~> 0.0.46"
    }
  }
}

provider "cato" {
  baseurl    = "https://api.catonetworks.com/api/v1/graphql2"
  token      = var.cato_token
  account_id = var.account_id
}

NOTE: Please refer to the Intro to Terraform instructional video for a guide on how to set up authentication, define Terraform variables, and manage environment variables like your API token to securely initialize the Cato Terraform provider.
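One common way to supply those variables without hard-coding secrets is Terraform's standard TF_VAR_ environment-variable convention. A minimal sketch, assuming main.tf references variables named cato_token and account_id that you declare in a variables.tf:

# Terraform automatically maps TF_VAR_<name> to variable "<name>"
export TF_VAR_cato_token="your-api-token-here"
export TF_VAR_account_id="your-account-id"

# Initialize the provider and confirm it authenticates
terraform init
terraform plan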
Working with CSV Format

The CSV format is ideal when you want to:

- Edit configurations in Excel or Google Sheets
- Separate site metadata from network ranges
- Have human-readable, easily diff-able files

Export to CSV

# Export all socket sites to CSV format
catocli export socket_sites \
  -f csv \
  --output-directory=config_data_csv

This creates:

- socket_sites_{account_id}.csv - Main site configuration
- sites_config_{account_id}/{site_name}_network_ranges.csv - Per-site network ranges

Add CSV Module to Terraform

Update your main.tf to include the CSV module with the path to your files:

# CSV-based site management
module "sites_from_csv" {
  source                               = "catonetworks/socket-bulk-sites/cato"
  sites_csv_file_path                  = "socket_sites_12345.csv"
  sites_csv_network_ranges_folder_path = "sites_config_12345/"
}

Validate CSV Site Location Syntax

If you have updated your CSV with additional sites, or changed the addresses (country code, city, state code, timezone) of existing sites, use the following to validate the site location syntax as a pre-flight check before applying changes:

catocli import validate_site_location my_socket_sites.csv -f=csv

Loading site data from my_socket_sites.csv...
Loaded 4 sites
========================================================
VALIDATION RESULTS
========================================================
[✗] Site 1: My X1500 Site (CSV line 2)
    Location: Wyoming, Usxyz, US-MN
    Timezone: America/Chicago
    Status: INVALID - Location not found: Wyoming, Us22, US-MN
========================================================
SUMMARY
========================================================
Total sites processed: 4
Valid sites: 1 (25.0%)
Invalid sites: 1 (25.0%)
Skipped sites: 2 (50.0%)
========================================================
SKIPPED ROWS (all location fields empty)
========================================================
- My X1500 Site (CSV line 3)
- My X1600 Site (CSV line 5)
========================================================
HOW TO FIX INVALID LOCATIONS
========================================================
Use the following catocli query to search for valid locations:
catocli query siteLocation -h

Validate CSV Required Fields and Import into Terraform State

# Initialize Terraform
terraform init

# Validate the CSV has all required fields before attempting import
catocli import socket_sites_to_tf \
  --data-type csv \
  --csv-file config_data_csv/socket_sites.csv \
  --csv-folder config_data_csv/sites_config/ \
  --module-name module.sites_from_csv --validate

# Import existing resources into Terraform state
catocli import socket_sites_to_tf \
  --data-type csv \
  --csv-file config_data_csv/socket_sites.csv \
  --csv-folder config_data_csv/sites_config/ \
  --module-name module.sites_from_csv \
  --auto-approve

# Review (should show no changes if the import was successful)
terraform plan

Working with JSON Format

The JSON format is ideal when you want to:

- Use programmatic tools to manipulate configurations
- Keep all configuration in a single file
- Work with JSON-aware editors and validation tools

Export to JSON

# Export all socket sites to JSON format
catocli export socket_sites \
  -f json \
  --output-directory=config_data

Best Practices

1. Version Control Everything

Use a version control system to manage changes to your configuration files; in this example, the Git client is used to track infrastructure file changes:

# Initialize repository
git init
git add main.tf
git commit -m "Initial Terraform configuration"

2. Regular Exports and Backups

Create automated backup scripts to regularly export your configuration (sites_backup.sh):

#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="backups/$DATE"
mkdir -p "$BACKUP_DIR"
catocli export socket_sites -f json --output-directory="$BACKUP_DIR"
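To make the exports truly regular, schedule the script. A minimal sketch using cron; the script path, schedule, and log location are placeholders:

# Run the export script daily at 02:00 (add via: crontab -e)
0 2 * * * /path/to/sites_backup.sh >> /var/log/cato_backup.log 2>&1

On Windows, Task Scheduler can run the equivalent script on the same cadence.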
Troubleshooting

Issue: Import Fails with "Resource Already Exists"

Symptom: Error: Resource already exists in state

Solution:

# List all items in terraform state
terraform state list

# Show terraform state
terraform show

# Remove the resource from state and re-import
terraform state rm 'module.sites_from_csv.cato_socket_site["Your Cato Site Name Here"]'

Issue: Plan Shows Unexpected Changes

Symptom: Plan: 0 to add, 25 to change, 0 to destroy

Solution:

# Export fresh configuration from CMA
catocli export socket_sites -f json --output-directory=config_data_verify

# Compare with your current configuration
diff config_data/socket_sites.json config_data_verify/socket_sites.json

Conclusion

Brownfield deployments for Cato Networks enable you to bring existing infrastructure under version-controlled, automated management without disruption. By following this guide, you can:

- Eliminate manual configuration errors through automation
- Maintain consistency across hundreds of sites
- Accelerate deployments from days to minutes
- Improve disaster recovery with infrastructure-as-code backups
- Enable collaboration through Git-based workflows
- Ensure compliance with standardized configurations

Key Takeaways

- Start Small: Begin by exporting a single site, validate the process, then scale
- Test First: Always run terraform plan before terraform apply (use -parallelism=1 for large changes)
- Version Control: Git is essential for tracking changes and enabling rollbacks
- Automate Backups: Regular exports provide disaster recovery capability
- Document Everything: Clear documentation enables team collaboration

Additional Resources

- Cato API Essentials - Videos
- Cato Terraform Provider
- Socket-Bulk-Sites Terraform Module
- Cato CLI
- Cato API Documentation
- Learning Center: Using Terraform with Cato Cloud
- Online JSON Formatter

Happy Infrastructure-as-Code Management!

Unlock the Power of Custom Analytics with Cato CLI Custom Reports
Hello Cato Community! 👋

While Cato's built-in dashboards provide comprehensive visibility into your network, security, and user activity, there are times when you need custom, ad-hoc reporting tailored to your specific business needs and potentially integrated with 3rd-party platforms. That's where the Cato CLI Custom Reports capability shines!

NOTE: All of the data and reports in this article are also available in the Cato MCP server and can be accessed with your favorite AI GPT client.

Why Custom Reports?

The Cato Management Application offers beautiful, detailed dashboards for monitoring your environment. However, what if you need to:

- Generate reports on a custom schedule for compliance requirements
- Analyze specific time periods or unusual patterns
- Correlate data across multiple dimensions not available in standard dashboards
- Export specific datasets for executive presentations or third-party analysis tools
- Build automated reporting workflows integrated with your existing systems

Custom Reports give you direct access to the Cato API to extract exactly the data you need, when you need it, in the format you need it.

First, you will need to install the catocli (installation instructions are referenced here):

pip3 install catocli
catocli configure set
catocli -h

Getting Started - Run Your First Report

catocli query appStats '{
  "dimension": [{"fieldName": "user_name"}],
  "measure": [{"aggType": "sum", "fieldName": "traffic"}],
  "timeFrame": "last.P1D"
}' -f csv --csv-filename=my_first_report.csv

Integrate! Forward any CLI JSON output to a network endpoint host and port:

catocli query appStats '{
  "dimension": [{"fieldName": "user_name"}],
  "measure": [{"aggType": "sum", "fieldName": "traffic"}],
  "timeFrame": "last.P1D"
}' -n 1.2.3.4:514
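The CLI's JSON output also plays well with standard tooling. A quick sketch piping a report through jq, assuming jq is installed and that JSON is printed to stdout when no -f csv flag is given:

# Pretty-print the report (assumes jq is installed and JSON is the default output)
catocli query appStats '{
  "dimension": [{"fieldName": "user_name"}],
  "measure": [{"aggType": "sum", "fieldName": "traffic"}],
  "timeFrame": "last.P1D"
}' | jq '.'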
______________________________________________________________

Available Example Custom Report Types

The Cato CLI provides six powerful report categories, each designed for specific analytical needs; many of these are documented directly in the help menu of the CLI (click here for more examples):

📊 Account Metrics - Network Performance Analytics

What it does: Provides detailed network performance metrics broken down by site, user, or interface over specified time periods.

Use cases:
- Monitor site-level bandwidth utilization trends
- Identify performance bottlenecks by interface
- Track latency and packet loss across your global network
- Capacity planning and bandwidth forecasting

Example: Site Performance Analysis

catocli query accountMetrics '{
  "dimension": [
    {"fieldName": "site_name"},
    {"fieldName": "interface_name"}
  ],
  "measure": [
    {"aggType": "avg", "fieldName": "latency"},
    {"aggType": "sum", "fieldName": "bandwidth_usage"},
    {"aggType": "max", "fieldName": "packet_loss"}
  ],
  "timeFrame": "last.P7D"
}' -f csv --csv-filename=site_performance_weekly.csv

Sample Output:

site_name  | interface_name | avg_latency_ms | bandwidth_usage_mb | max_packet_loss_pct
HQ-NewYork | WAN1           | 15.3           | 45231.5            | 0.02
Branch-LA  | WAN2           | 16.1           | 12456.2            | 0.12

---------------------------------------------------------------------------------

📱 Application Statistics - User Activity & Application Analysis

What it does: Aggregated analysis of user activity and application usage, showing total traffic, flow counts, and bandwidth consumption.

Use cases:
- Identify top bandwidth consumers by user or application
- Security risk assessment based on application risk scores
- SaaS application adoption tracking
- Chargeback reporting by department or user

Example 1: High-Traffic Users (with Post-Aggregation Filter)

catocli query appStats '{
  "dimension": [
    {"fieldName": "user_name"}
  ],
  "measure": [
    {"aggType": "sum", "fieldName": "traffic"},
    {"aggType": "sum", "fieldName": "flows_created"}
  ],
  "appStatsPostAggFilter": [
    {
      "aggType": "sum",
      "filter": {
        "fieldName": "traffic",
        "operator": "gt",
        "values": ["1073741824"]
      }
    }
  ],
  "timeFrame": "last.P2D"
}' -f csv --csv-filename=high_traffic_users.csv

Sample Output:

user_name  | flows_created | traffic_mb
Mary Berry | 669966        | 4478.5
John Doe   | 991395        | 2950.1

What is appStatsPostAggFilter?

Post-aggregation filters allow you to filter results after metrics are calculated, similar to a SQL HAVING clause. This is powerful because regular filters (appStatsFilter) apply before aggregation, while post-aggregation filters apply after the metrics are computed.

Key Capabilities:
- Filter on aggregated values (sum, avg, max, min, count, count_distinct)
- Find users/apps exceeding thresholds (e.g., >1GB traffic)
- Identify values within specific ranges (e.g., 100-1000 flows)
- Detect outliers based on statistical measures

Supported Operators: is, is_not, gt, gte, lt, lte, between, not_between

Post-Aggregation Filter Examples (postAggFilters)

High-Traffic Users (>1GB Total Traffic)

Find users whose total traffic exceeds 1GB over the last 2 days:

catocli query appStats '{
  "dimension": [
    {"fieldName": "user_name"}
  ],
  "measure": [
    {"aggType": "sum", "fieldName": "traffic"},
    {"aggType": "sum", "fieldName": "flows_created"}
  ],
  "appStatsPostAggFilter": [
    {
      "aggType": "sum",
      "filter": {
        "fieldName": "traffic",
        "operator": "gt",
        "values": ["1073741824"]
      }
    }
  ],
  "appStatsSort": [
    {"fieldName": "traffic", "order": "desc"}
  ],
  "timeFrame": "last.P2D"
}' -f csv --csv-filename=appstats_high_traffic_users.csv

Applications with Average Traffic Above Threshold

Identify applications where average traffic per flow exceeds 10MB:

catocli query appStats '{
  "dimension": [
    {"fieldName": "application_name"}
  ],
  "measure": [
    {"aggType": "avg", "fieldName": "traffic"},
    {"aggType": "count", "fieldName": "flows_created"},
    {"aggType": "sum", "fieldName": "traffic"}
  ],
  "appStatsPostAggFilter": [
    {
      "aggType": "avg",
      "filter": {
        "fieldName": "traffic",
        "operator": "gte",
        "values": ["10485760"]
      }
    }
  ],
  "appStatsSort": [
    {"fieldName": "traffic", "order": "desc"}
  ],
  "timeFrame": "last.P7D"
}' -f csv --csv-filename=appstats_high_avg_traffic_apps.csv
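The supported operator list above also covers range checks. A hedged sketch using the between operator with the same query shape to find users in a specific flow-count band; passing the two bounds as a two-element values array is an assumption, so verify against the help menu first:

# Users whose total flows over the last day fall between 100 and 1000
# NOTE: two bounds via values[] is assumed; verify with: catocli query appStats -h
catocli query appStats '{
  "dimension": [
    {"fieldName": "user_name"}
  ],
  "measure": [
    {"aggType": "sum", "fieldName": "flows_created"}
  ],
  "appStatsPostAggFilter": [
    {
      "aggType": "sum",
      "filter": {
        "fieldName": "flows_created",
        "operator": "between",
        "values": ["100", "1000"]
      }
    }
  ],
  "timeFrame": "last.P1D"
}' -f csv --csv-filename=appstats_mid_volume_users.csv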
---------------------------------------------------------------------------------

📈 Application Statistics Time Series - Traffic Analysis Over Time

What it does: Shows application and user traffic patterns over time with hourly/daily/custom time bucket breakdowns.

Use cases:
- Peak usage analysis and capacity planning
- Identify traffic trends and seasonal patterns
- Anomaly detection (unusual spikes or drops)
- Business hours vs. after-hours usage comparison

Example: Hourly Traffic Breakdown

catocli query appStatsTimeSeries '{
  "buckets": 24,
  "dimension": [
    {"fieldName": "application_name"},
    {"fieldName": "user_name"}
  ],
  "perSecond": false,
  "measure": [
    {"aggType": "sum", "fieldName": "upstream"},
    {"aggType": "sum", "fieldName": "downstream"},
    {"aggType": "sum", "fieldName": "traffic"}
  ],
  "timeFrame": "last.P1D"
}' -f csv --csv-filename=hourly_traffic_patterns.csv

Why use perSecond: false? When analyzing throughput statistics (upstream, downstream, traffic), set "perSecond": false to get accurate byte counts instead of rates. This gives you actual data transfer volumes over time.

______________________________________________________________

🔒 Events Time Series - Security Events & Threat Analysis

What it does: Time-based analysis of security events, including IPS alerts, threat detections, connectivity events, and policy violations.

Use cases:
- Security incident trending and correlation
- IPS/IDS event pattern analysis
- Threat actor tracking over time
- Compliance reporting for security events

Example: IPS Events Trending

catocli query eventsTimeSeries '{
  "buckets": 168,
  "dimension": [
    {"fieldName": "event_type"},
    {"fieldName": "threat_severity"}
  ],
  "measure": [
    {"aggType": "count", "fieldName": "event_id"}
  ],
  "eventsFilter": [
    {
      "fieldName": "event_category",
      "operator": "in",
      "values": ["IPS", "Threat Prevention"]
    }
  ],
  "timeFrame": "last.P7D"
}' -f csv --csv-filename=ips_events_weekly.csv

______________________________________________________________

🔌 Socket Port Metrics - Socket Interface Performance Analysis

What it does: Aggregated performance metrics for Socket (SD-WAN) ports/interfaces, including bandwidth utilization, packet statistics, and error rates.

Use cases:
- WAN interface health monitoring
- Compare primary vs. backup link performance
- Interface utilization and capacity planning
- Troubleshoot connectivity issues

The query body follows this structure (annotated):

{
  "socketPortMetricsDimension": [        // Fields to group results by
    {"fieldName": "socket_interface"},
    {"fieldName": "device_id"},
    {"fieldName": "site_name"}
  ],
  "socketPortMetricsFilter": [],         // Filters to apply to data
  "socketPortMetricsMeasure": [          // Metrics to calculate
    {"aggType": "sum", "fieldName": "bytes_upstream"},
    {"aggType": "sum", "fieldName": "bytes_downstream"},
    {"aggType": "sum", "fieldName": "bytes_total"}
  ],
  "socketPortMetricsSort": [],           // Sort criteria
  "timeFrame": "last.P1D"                // Time range for analysis
}
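To run that aggregate query end to end, wrap the body in a catocli call. A hedged sketch assuming the aggregate operation is named socketPortMetrics, by analogy with the socketPortMetricsTimeSeries command shown below:

# NOTE: operation name assumed by analogy; verify with: catocli query -h
catocli query socketPortMetrics '{
  "socketPortMetricsDimension": [
    {"fieldName": "socket_interface"},
    {"fieldName": "site_name"}
  ],
  "socketPortMetricsFilter": [],
  "socketPortMetricsMeasure": [
    {"aggType": "sum", "fieldName": "bytes_total"}
  ],
  "socketPortMetricsSort": [],
  "timeFrame": "last.P1D"
}' -f csv --csv-filename=socket_port_metrics_daily.csv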
______________________________________________________________

⏱️ Socket Port Time Series - Socket Performance Metrics Over Time

What it does: Time-based analysis of Socket interface metrics, tracking performance trends, utilization patterns, and health indicators across time buckets.

Use cases:
- Peak traffic period identification
- Link failover event correlation
- Performance degradation detection
- Historical bandwidth utilization trending

Daily Traffic Patterns

Analyze interface traffic patterns throughout the day:

catocli query socketPortMetricsTimeSeries '{
  "buckets": 24,
  "socketPortMetricsDimension": [
    {"fieldName": "socket_interface"},
    {"fieldName": "site_name"}
  ],
  "socketPortMetricsMeasure": [
    {"aggType": "sum", "fieldName": "bytes_downstream"},
    {"aggType": "sum", "fieldName": "bytes_upstream"},
    {"aggType": "sum", "fieldName": "bytes_total"}
  ],
  "perSecond": false,
  "timeFrame": "last.P1D"
}' -f csv --csv-filename socketPortMetricsTimeSeries_daily_traffic_patterns.csv

Peak Hour Identification

Identify peak traffic hours with high-resolution monitoring (96 buckets over one day gives 15-minute intervals):

catocli query socketPortMetricsTimeSeries '{
  "buckets": 96,
  "socketPortMetricsDimension": [
    {"fieldName": "socket_interface"}
  ],
  "socketPortMetricsMeasure": [
    {"aggType": "sum", "fieldName": "bytes_total"}
  ],
  "perSecond": false,
  "timeFrame": "last.P1D"
}' -f csv --csv-filename socketPortMetricsTimeSeries_peak_hour_analysis.csv

______________________________________________________________

Additional Resources

📚 Comprehensive Documentation:
- User Activity Report Format
- Cato CLI GitHub Repository
- Cato API Documentation

💡 Pro Tips:
- Use --append-timestamp to add timestamps to filenames for historical tracking
- Export to CSV for easy analysis in Excel, Tableau, or other BI tools
- Set up scheduled scripts (cron/Task Scheduler) for automated reporting
- Combine multiple queries to build comprehensive dashboards

Real-World Use Case Examples

Compliance Reporting

Generate monthly bandwidth usage reports by user for chargeback:

catocli query appStats '{
  "dimension": [{"fieldName": "user_name"}, {"fieldName": "department"}],
  "measure": [{"aggType": "sum", "fieldName": "traffic"}],
  "timeFrame": "last.P1M"
}' -f csv --csv-filename=monthly_usage_chargeback.csv --append-timestamp

Security Operations

Daily high-risk application usage tracking:

catocli query appStats '{
  "appStatsFilter": [{"fieldName": "risk_score", "operator": "gte", "values": ["7"]}],
  "dimension": [{"fieldName": "application_name"}, {"fieldName": "user_name"}],
  "measure": [{"aggType": "sum", "fieldName": "traffic"}],
  "timeFrame": "last.P1D"
}' -f csv --csv-filename=high_risk_apps_daily.csv --append-timestamp

Capacity Planning

Track site bandwidth trends for capacity planning:

catocli query accountMetrics '{
  "dimension": [{"fieldName": "site_name"}],
  "measure": [
    {"aggType": "avg", "fieldName": "bandwidth_usage"},
    {"aggType": "max", "fieldName": "bandwidth_usage"}
  ],
  "timeFrame": "last.P3M"
}' -f csv --csv-filename=site_capacity_quarterly.csv

______________________________________________________________

Community Feedback

We'd love to hear how you're using Custom Reports! Share your:
- 📊 Creative use cases and analysis scenarios
- 🔧 Automation scripts and workflows
- 💡 Tips and best practices
- ❓ Questions and feature requests

Happy reporting! 🎉 For technical support or feature requests, please visit and post in the Cato Community or open an issue on the Cato CLI GitHub repository.

Site Management API Multi-Tool Workshop
Welcome to this hands-on workshop, where you'll learn to manage Cato Networks infrastructure (socket sites, network interfaces, and network ranges) using three different tools in a real-world workflow. This exercise outlines the API structure for managing site configurations and demonstrates the flexibility of the Cato API ecosystem, while teaching you when and how to use each tool for maximum efficiency.

What You'll Learn

By the end of this workshop, you'll be able to:

- Install, configure, and use the Cato API Explorer (a containerized, web-based GUI) that generates code syntax for Python, catocli, and cURL
- Install, configure, and use the Cato CLI to both read and update configurations
- Create new Cato sites and network interfaces, and add network ranges to interfaces via the API

Why Use Multiple Tools?

In real-world scenarios, you'll often use different tools for different tasks:

Tool         | Best For                                                 | Use Case
API Explorer | Testing new APIs, one-off changes, learning              | Initial site creation, exploring API capabilities
Cato CLI     | OS-agnostic tool for bulk operations, automation scripts | Updating multiple sites, generating reports
cURL         | Generic method of calling APIs directly, troubleshooting | Integrating with existing automation, minimal dependencies

Prerequisites

Before starting, ensure you have the following installed on your machine:

- Install Python
- Install Cato CLI
- Install Docker Desktop on Mac, Windows, or Linux

NOTE: Manually start the Docker application before checking whether it is running (macOS example):

open -a docker

Validate Required Tools

# 1. Docker (for API Explorer)
docker --version

# 2. Python 3.6+
python3 --version

# 3. Cato CLI
catocli --version

# 4. cURL
curl --version

Cato API Credentials

You'll need:

- API Token: Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- Account ID: Your Cato account number, found in Account > Account Info or in the CMA URL, for example: https://system.cc.catonetworks.com/#/account/{account_id}/

Site Management API Workshop Overview

The site workshop workflow consists of five main phases:

- Phase 1: Create Site using the Cato API Explorer (Docker web UI)
- Phase 2: Retrieve Site ID using the Cato CLI
- Phase 3: Update Interface using the Cato CLI
- Phase 4: Retrieve Interface ID using the Cato CLI
- Phase 5: Add Network Range using cURL from the Cato API Explorer

Phase 1: Create a Site Using API Explorer

Step 1.1: Launch the API Explorer

The Cato API Explorer is a Docker-based web application that provides an interactive GUI for testing GraphQL API calls.
mkdir cato-api-explorer
cd cato-api-explorer

# Create docker-compose.yml
cat << 'EOF' > docker-compose.yml
services:
  cato-api-explorer:
    container_name: cato-api-explorer
    image: ghcr.io/catonetworks/cato-api-explorer:latest
    ports:
      - 8080:8080
      - 8443:443
EOF

# Pull and start the container
docker-compose pull
docker-compose up -d

Step 1.2: Access the API Explorer

# Open in your browser
open http://localhost:8080
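If the page doesn't load, a quick optional sanity check confirms the container is up and the UI is responding:

# Confirm the container is running
docker ps --filter name=cato-api-explorer

# Confirm the web UI answers on port 8080
curl -sI http://localhost:8080 | head -n 1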
Step 1.3: Configure API Credentials

1. Click on the Settings tab (gear icon)
2. Enter your API Endpoint, API Token, and Account ID
3. Click Save Settings

Step 1.4: Create the Site

Follow these steps in the API Explorer:

1. Navigate to the GraphQL API tab and enter addSocketSite in the API Operation field
2. Select mutation.site.addSocketSite() from the dropdown
3. Click Edit on the addSocketSiteInput field and fill out the required fields
4. Change connectionType to SOCKET_X1600 and the site name to My 1600 Site
5. Configure the siteLocation with your desired city, state, and country

Request Variables should reflect:

{
  "accountId": "12345",
  "addSocketSiteInput": {
    "connectionType": "SOCKET_X1600",
    "name": "My 1600 Site",
    "nativeNetworkRange": "10.111.0.0/24",
    "siteLocation": {
      "city": "San Diego",
      "countryCode": "US",
      "stateCode": "US-CA",
      "timezone": "America/Los_Angeles"
    },
    "siteType": "BRANCH"
  }
}

Click "Execute" and save the returned siteID.

Example mutation.site.addSocketSite() screenshot in API Explorer:

Phase 2: Retrieve Site ID Using Cato CLI

Now that we've created the site, let's verify it exists and retrieve its ID using the Cato CLI.

Step 2.1: Configure Cato CLI

# Interactive configuration
catocli configure

Step 2.2: Search for the Site

# Use help menus
catocli -h
catocli entity -h

# Search by site name
catocli entity site list -s "My 1600 Site"

# Pretty print JSON output
catocli entity site -p

# Format as CSV
catocli entity site -s "My 1600 Site" -f csv

Phase 3: Update Interface Using Cato CLI

Now we'll update the site's network interface configuration using syntax generated from the API Explorer.

Step 3.1: List Existing Interfaces

By default, when creating a Cato site, the site will have one LAN interface and one WAN interface. The default LAN interface will be configured with the native range used when creating the site.

# Use entityLookup to get interface info
catocli query entityLookup '{
  "entityInput": {
    "id": "12345",
    "type": "site"
  },
  "type": "networkInterface"
}'

Step 3.2: Update the Interface

In the API Explorer, configure the interface update:

1. Navigate to the GraphQL API tab and enter updateSocketInterface
2. Select INT_7 as the interface to configure
3. Set destType to LAN
4. Configure subnet and localIp

Request Variables should reflect:

{
  "accountId": "12345",
  "siteId": "172807",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}

Example mutation.site.updateSocketInterface() screenshot in API Explorer:

Step 3.3: Execute with Cato CLI

Copy the Cato CLI syntax from the API Explorer and execute it using your siteID:

catocli mutation site updateSocketInterface '{
  "siteId": "12345",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}'

Phase 4: Retrieve Interface ID

After updating the interface, retrieve the Interface Entity ID for adding network ranges:

# Retrieve interface details
catocli entity networkInterface list -f csv

# Or use entityLookup
catocli query entityLookup '{
  "entityInput": {"id": "12345", "type": "site"},
  "type": "networkInterface"
}'

Save the Interface Entity ID for the INT_7 interface for use in Phase 5.

Phase 5: Add Network Range Using cURL

Finally, we'll add a network range to the INT_7 interface using a raw cURL command.

Step 5.1: Configure in API Explorer

1. In the API Explorer, navigate to addNetworkRange
2. Select the LAN_7 interface
3. Configure network range parameters (name, subnet, VLAN, DHCP)
4. Uncheck the Mask secret key checkbox to reveal your API key

Example mutation.site.addNetworkRange() screenshot in API Explorer:

Step 5.2: Execute cURL Command

Copy the cURL command from the API Explorer and execute it in your terminal:

curl -k -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "x-API-Key: YOUR_API_KEY_HERE" \
  'https://api.catonetworks.com/api/v1/graphql2' \
  --data '{
    "query": "mutation siteAddNetworkRange ( $lanSocketInterfaceId:ID! $addNetworkRangeInput:AddNetworkRangeInput! $accountId:ID! ) { site ( accountId:$accountId ) { addNetworkRange ( lanSocketInterfaceId:$lanSocketInterfaceId input:$addNetworkRangeInput ) { networkRangeId } } }",
    "variables": {
      "accountId": "11362",
      "addNetworkRangeInput": {
        "dhcpSettings": {
          "dhcpType": "ACCOUNT_DEFAULT"
        },
        "localIp": "10.113.0.1",
        "name": "Custom Network",
        "rangeType": "VLAN",
        "subnet": "10.113.0.0/24",
        "vlan": 123
      },
      "lanSocketInterfaceId": "207469"
    },
    "operationName": "siteAddNetworkRange"
  }'

Expected Response: Network Range ID returned

{
  "data": {
    "site": {
      "addNetworkRange": {
        "networkRangeId": "UzY1NDI4Mg=="
      }
    }
  }
}

Key Takeaways

When to Use Each Tool

API Explorer (Web GUI):
- Initial testing and exploration
- Learning the API structure
- One-off changes during troubleshooting
- Generating cURL and Python templates

Cato CLI (catocli):
- Bulk operations and reporting
- Automation scripts
- Quick queries from the command line
- CSV/JSON export capabilities

cURL (Raw API):
- Troubleshooting and calling APIs directly
- Minimal dependencies
- Custom error handling with verbose output (-v flag)
- Integration examples for any programming language

Additional Resources

- Cato API Essentials - Videos
- Cato CLI
- Cato API Documentation

Congratulations on Completing the Workshop!
You now have hands-on experience with three powerful API tools.

Deploying the Cato Sentinel Connector | Full Installation & Setup Guide for Azure Sentinel
In this video, we'll walk you through the complete setup of the Cato Sentinel Connector, connecting your Cato Networks environment to Microsoft Azure Sentinel for unified visibility and smarter threat detection.

What You'll Learn:
- Setting up your Azure environment (Resource Group, Log Analytics Workspace, Sentinel)
- Creating your Cato API Key and finding your Account ID
- Deploying the ARM template / Azure Function App to ingest Cato Events, CEF, Audit Logs & XDR Stories
- Configuring data ingestion and filters for efficient log collection
- Installing the Sentinel Workbook to visualize Cato data
- Best practices and tuning tips for ingestion and workspace setup

Who It's For: Security engineers, SOC analysts, and IT professionals using Cato Networks who want to enhance visibility through Azure Sentinel.

Prerequisites:
- Active Cato Networks account with API access
- Azure subscription with Sentinel enabled
- Permissions to deploy ARM templates and Function Apps

By the end of this tutorial, you'll have a fully operational integration between Cato Networks and Azure Sentinel, empowering your team with advanced insights and real-time threat correlation.

GitHub repository: https://github.com/catonetworks/cato-sentinel-connect
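If you prefer scripting the Azure side rather than using the portal, the prerequisite resources can be created with the Azure CLI. A minimal sketch; the resource-group and workspace names are placeholders, and the ARM template path is an assumption, so take the actual template from the GitHub repository linked above:

# Create a resource group (name and region are placeholders)
az group create --name cato-sentinel-rg --location eastus

# Create a Log Analytics workspace for Sentinel
az monitor log-analytics workspace create \
  --resource-group cato-sentinel-rg \
  --workspace-name cato-sentinel-ws

# Deploy the connector's ARM template (file name is a placeholder; see the repo)
az deployment group create \
  --resource-group cato-sentinel-rg \
  --template-file azuredeploy.json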
Introduction to Terraform - From Setup to Your First Apply
Welcome to this hands-on introduction to Terraform, the powerful Infrastructure as Code (IaC) tool that lets you automate cloud resource provisioning with ease.

In this video, you'll learn:
- How to install Terraform on your system
- What Terraform state and commands are and how they work
- How to run your first terraform apply to deploy real infrastructure

By the end of this session, you'll have a working Terraform environment and a solid understanding of how to manage your infrastructure efficiently and reproducibly.

Perfect for: Beginners, DevOps engineers, cloud professionals, and anyone looking to get started with Infrastructure as Code.

Resources mentioned:
- Terraform Official Documentation: https://developer.hashicorp.com/terraform/docs
- Terraform Download: https://developer.hashicorp.com/terraform/install
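For reference while you watch, the standard first-run workflow covered in the video boils down to a handful of commands (the working directory and .tf file contents are up to you):

# From a directory containing at least one .tf file:
terraform init      # download providers and set up the working directory
terraform fmt       # normalize formatting of your .tf files
terraform validate  # catch syntax and reference errors early
terraform plan      # preview the changes Terraform would make
terraform apply     # create the infrastructure (prompts for approval)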