# Recent Content

## Site Management API Multi-Tool Workshop
Welcome to this hands-on workshop, where you'll learn to manage Cato Networks infrastructure (socket sites, network interfaces, and network ranges) using three different tools in a real-world workflow. This exercise outlines the API structure for managing site configurations, demonstrates the flexibility of the Cato API ecosystem, and teaches you when and how to use each tool for maximum efficiency.

### What You'll Learn

By the end of this workshop, you'll be able to:

- Install, configure, and use the Cato API Explorer (a containerized, web-based GUI) that generates code, including syntax for Python, catocli, and cURL
- Install, configure, and use the Cato CLI to both read and update configurations
- Create new Cato sites and network interfaces, and add network ranges to interfaces via API

### Why Use Multiple Tools?

In real-world scenarios, you'll often use different tools for different tasks:

| Tool | Best For | Use Case |
| --- | --- | --- |
| API Explorer | Testing new APIs, one-off changes, learning | Initial site creation, exploring API capabilities |
| Cato CLI | OS-agnostic tool for bulk operations, automation scripts | Updating multiple sites, generating reports |
| cURL | Generic method of calling APIs directly, troubleshooting | Integrating with existing automation, minimal dependencies |

### Prerequisites

Before starting, ensure you have the following installed on your machine:

- Install Python
- Install Cato CLI
- Install Docker Desktop on Mac, Windows, or Linux

NOTE: Manually start the Docker application before checking whether it is running: `open -a docker`

#### Validate Required Tools

```bash
# 1. Docker (for API Explorer)
docker --version

# 2. Python 3.6+
python3 --version

# 3. Cato CLI
catocli --version

# 4. cURL
curl --version
```

### Cato API Credentials

You'll need:

- API Token: Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- Account ID: Your Cato account number, found in Account > Account Info or in the CMA URL, for example: https://system.cc.catonetworks.com/#/account/{account_id}/

### Site Management API Workshop Overview

The site workshop workflow consists of five main phases:

- Phase 1: Create Site using Cato API Explorer (Docker Web UI)
- Phase 2: Retrieve Site ID using Cato CLI
- Phase 3: Update Interface using Cato CLI
- Phase 4: Retrieve Interface ID using Cato CLI
- Phase 5: Add Network Range using cURL from Cato API Explorer

### Phase 1: Create a Site Using API Explorer

#### Step 1.1: Launch the API Explorer

The Cato API Explorer is a Docker-based web application that provides an interactive GUI for testing GraphQL API calls.
```bash
mkdir cato-api-explorer
cd cato-api-explorer

# Create docker-compose.yml
cat << 'EOF' > docker-compose.yml
services:
  cato-api-explorer:
    container_name: cato-api-explorer
    image: ghcr.io/catonetworks/cato-api-explorer:latest
    ports:
      - 8080:8080
      - 8443:443
EOF

# Pull and start the container
docker-compose pull
docker-compose up -d
```

#### Step 1.2: Access the API Explorer

```bash
# Open in your browser
open http://localhost:8080
```

#### Step 1.3: Configure API Credentials

1. Click on the Settings tab (gear icon)
2. Enter your API Endpoint, API Token, and Account ID
3. Click Save Settings

#### Step 1.4: Create the Site

Follow these steps in the API Explorer:

1. Navigate to the GraphQL API tab and enter addSocketSite in the API Operation field
2. Select mutation.site.addSocketSite() from the dropdown
3. Click Edit on the addSocketSiteInput field and fill out the required fields
4. Change connectionType to SOCKET_X1600 and the site name to My 1600 Site
5. Configure the siteLocation with your desired city, state, and country

Request Variables should reflect:

```json
{
  "accountId": "12345",
  "addSocketSiteInput": {
    "connectionType": "SOCKET_X1600",
    "name": "My 1600 Site",
    "nativeNetworkRange": "10.111.0.0/24",
    "siteLocation": {
      "city": "San Diego",
      "countryCode": "US",
      "stateCode": "US-CA",
      "timezone": "America/Los_Angeles"
    },
    "siteType": "BRANCH"
  }
}
```

Click Execute and save the returned siteID.

Example mutation.site.addSocketSite() screenshot in API Explorer:

### Phase 2: Retrieve Site ID Using Cato CLI

Now that we've created the site, let's verify it exists and retrieve its ID using the Cato CLI.

#### Step 2.1: Configure Cato CLI

```bash
# Interactive configuration
catocli configure
```

#### Step 2.2: Search for the Site

```bash
# Use help menus
catocli -h
catocli entity -h

# Search by site name
catocli entity site list -s "My 1600 Site"

# Pretty print JSON output
catocli entity site -p

# Format as CSV
catocli entity site -s "My 1600 Site" -f csv
```
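If you want to chain this lookup into an automation script (one of the Cato CLI use cases listed earlier), the Step 2.2 command can be wrapped in a few lines of Python. This is a minimal, hedged sketch rather than an official workshop step: the command syntax is taken verbatim from Step 2.2, but the exact output structure of catocli may vary between versions, so the script parses whatever JSON comes back and searches for `id` fields instead of assuming a fixed schema.

```python
# Hedged sketch: wrap the Step 2.2 site lookup in Python for automation.
# Assumption: this catocli command prints JSON to stdout (the -p flag above
# pretty-prints it); if the output shape differs, adjust the parsing.
import json
import subprocess

def find_ids(node, found=None):
    """Recursively collect any 'id' values from the parsed CLI output."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "id":
                found.append(value)
            else:
                find_ids(value, found)
    elif isinstance(node, list):
        for item in node:
            find_ids(item, found)
    return found

result = subprocess.run(
    ["catocli", "entity", "site", "list", "-s", "My 1600 Site"],
    capture_output=True, text=True, check=True,
)

try:
    data = json.loads(result.stdout)
except json.JSONDecodeError:
    raise SystemExit("Unexpected (non-JSON) output:\n" + result.stdout)

print("Candidate site IDs:", find_ids(data))
```

The same pattern applies to the interface lookups in Phase 4.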
### Phase 3: Update Interface Using Cato CLI

Now we'll update the site's network interface configuration using syntax generated from the API Explorer.

#### Step 3.1: List Existing Interfaces

By default, a newly created Cato site has one LAN interface and one WAN interface. The default LAN interface is configured with the native range used when creating the site.

```bash
# Use entityLookup to get interface info
catocli query entityLookup '{
  "entityInput": { "id": "12345", "type": "site" },
  "type": "networkInterface"
}'
```

#### Step 3.2: Update the Interface

In the API Explorer, configure the interface update:

1. Navigate to the GraphQL API tab and enter updateSocketInterface
2. Select INT_7 as the interface to configure
3. Set destType to LAN
4. Configure subnet and localIp

Request Variables should reflect:

```json
{
  "accountId": "12345",
  "siteId": "172807",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}
```

Example mutation.site.updateSocketInterface() screenshot in API Explorer:

#### Step 3.3: Execute with Cato CLI

Copy the Cato CLI syntax from the API Explorer and execute it using your siteID:

```bash
catocli mutation site updateSocketInterface '{
  "siteId": "12345",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}'
```

### Phase 4: Retrieve Interface ID

After updating the interface, retrieve the Interface Entity ID for adding network ranges:

```bash
# Retrieve interface details
catocli entity networkInterface list -f csv

# Or use entityLookup
catocli query entityLookup '{
  "entityInput": {"id": "12345", "type": "site"},
  "type": "networkInterface"
}'
```

Save the Interface Entity ID for the INT_7 interface for use in Phase 5.

### Phase 5: Add Network Range Using cURL

Finally, we'll add a network range to the INT_7 interface using a raw cURL command.

#### Step 5.1: Configure in API Explorer

1. In the API Explorer, navigate to addNetworkRange
2. Select the LAN_7 interface
3. Configure the network range parameters (name, subnet, VLAN, DHCP)
4. Uncheck the Mask secret key checkbox to reveal your API key

Example mutation.site.addNetworkRange() screenshot in API Explorer:

#### Step 5.2: Execute cURL Command

Copy the cURL command from the API Explorer and execute it in your terminal:

```bash
curl -k -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "x-API-Key: YOUR_API_KEY_HERE" \
  'https://api.catonetworks.com/api/v1/graphql2' \
  --data '{
    "query": "mutation siteAddNetworkRange ( $lanSocketInterfaceId:ID! $addNetworkRangeInput:AddNetworkRangeInput! $accountId:ID! ) { site ( accountId:$accountId ) { addNetworkRange ( lanSocketInterfaceId:$lanSocketInterfaceId input:$addNetworkRangeInput ) { networkRangeId } } }",
    "variables": {
      "accountId": "11362",
      "addNetworkRangeInput": {
        "dhcpSettings": { "dhcpType": "ACCOUNT_DEFAULT" },
        "localIp": "10.113.0.1",
        "name": "Custom Network",
        "rangeType": "VLAN",
        "subnet": "10.113.0.0/24",
        "vlan": 123
      },
      "lanSocketInterfaceId": "207469"
    },
    "operationName": "siteAddNetworkRange"
  }'
```

Expected Response (Network Range ID returned):

```json
{
  "data": {
    "site": {
      "addNetworkRange": {
        "networkRangeId": "UzY1NDI4Mg=="
      }
    }
  }
}
```

### Key Takeaways

When to Use Each Tool

API Explorer (Web GUI):
- Initial testing and exploration
- Learning the API structure
- One-off changes during troubleshooting
- Generating cURL and Python templates

Cato CLI (catocli):
- Bulk operations and reporting
- Automation scripts
- Quick queries from the command line
- CSV/JSON export capabilities

cURL (Raw API):
- Troubleshooting and calling APIs directly
- Minimal dependencies
- Custom error handling with verbose output (-v flag)
- Integration examples for any programming language (a Python version of the Phase 5 request is sketched at the end of this post)

### Additional Resources

- Cato API Essentials - Videos
- Cato CLI
- Cato API Documentation

Congratulations on Completing the Workshop! You now have hands-on experience with three powerful API tools.
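As noted in the takeaways, the API Explorer can generate Python templates as well as cURL. For reference, here is a minimal, hedged Python sketch of the same Phase 5 addNetworkRange request. It requires the third-party requests package; the query text, endpoint, and x-API-Key header are copied from the cURL example above, while the token, account ID, and interface ID are placeholders you would replace with your own values.

```python
# Minimal sketch of the Phase 5 addNetworkRange call in Python.
# The GraphQL document and x-API-Key header mirror the cURL example above;
# API_KEY, ACCOUNT_ID, and LAN_INTERFACE_ID are placeholder values.
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
API_KEY = "YOUR_API_KEY_HERE"     # placeholder
ACCOUNT_ID = "12345"              # placeholder
LAN_INTERFACE_ID = "207469"       # placeholder: Interface Entity ID from Phase 4

QUERY = """
mutation siteAddNetworkRange($lanSocketInterfaceId: ID!, $addNetworkRangeInput: AddNetworkRangeInput!, $accountId: ID!) {
  site(accountId: $accountId) {
    addNetworkRange(lanSocketInterfaceId: $lanSocketInterfaceId, input: $addNetworkRangeInput) {
      networkRangeId
    }
  }
}
"""

variables = {
    "accountId": ACCOUNT_ID,
    "lanSocketInterfaceId": LAN_INTERFACE_ID,
    "addNetworkRangeInput": {
        "dhcpSettings": {"dhcpType": "ACCOUNT_DEFAULT"},
        "localIp": "10.113.0.1",
        "name": "Custom Network",
        "rangeType": "VLAN",
        "subnet": "10.113.0.0/24",
        "vlan": 123,
    },
}

response = requests.post(
    API_URL,
    json={"query": QUERY, "variables": variables, "operationName": "siteAddNetworkRange"},
    headers={"x-API-Key": API_KEY, "Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
# On success, expect data.site.addNetworkRange.networkRangeId, as in the
# Expected Response shown above.
print(response.json())
```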
## Office mode for Mac users

We have an AlwaysOn policy enabled for all users, and it is causing some trouble for our Mac users. Most of our users are on Windows: when they come to the office behind the socket, the client detects Office Mode automatically, they do not need to enter credentials, and they get network connectivity just fine. However, our Mac users need to enter credentials in the Cato Client before it detects Office Mode; if they do not enter credentials, they have no network connection. Our Mac users are not happy with this, since it adds some inconvenience when they are in the office. I am wondering if anyone has the same challenge and what possible workarounds exist.
## Terraform: IPsec site creation with Responder-only and destination type FQDN possible?

Hi, see subject. When trying to set up an IPsec site (IKEv2) in responder-only mode with destination type FQDN for the primary and secondary tunnel, Terraform (in fact, OpenTofu) gives this error:

```
│ Error: Cato API error in SiteAddIpsecIkeV2SiteTunnels
│
│   with cato_ipsec_site.Vienna,
│   on main.tf line 73, in resource "cato_ipsec_site" "Vienna":
│   73: resource "cato_ipsec_site" "Vienna" {
│
│ {"networkErrors":{"code":422,"message":"Response body {\"errors\":[{\"message\":\"input:
│ variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid
│ IPSecV2InterfaceId\",\"path\":[\"variable\",\"updateIpsecIkeV2SiteTunnelsInput\",\"primary\",\"tunnels\",0,\"tunnelId\"]}],\"data\":null}"},"graphqlErrors":[{"message":"input:
│ variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid
│ IPSecV2InterfaceId","path":["variable","updateIpsecIkeV2SiteTunnelsInput","primary","tunnels",0,"tunnelId"]}]}
╵
```

The error appears when adding the "tunnels" section; without that section, a deployment is possible. Obviously, the tunnels section is required.

```hcl
connection_mode     = "RESPONDER_ONLY"
identification_type = "IPV4"
primary = {
  destination_type = "FQDN"
  tunnels = [
    {
      public_site_ip = "10.10.10.10"
      psk            = "abcABC1234567!!"
      //last_mile_bw = {
      //downstream = 10
      //upstream = 10
    }
  ]
}
```

Is that supported with the Terraform provider currently?

Thanks, Christian

## Brownfield Deployments for Cato Network Sites
Have you ever found yourself managing dozens or even hundreds of Cato Network sites manually through the Cato Management Application (CMA), wishing there were a better way to maintain consistency, version control, and automation? Cato Brownfield Deployments (or Day 2 Operations) solve exactly this problem by enabling you to bring your existing Cato infrastructure under Terraform management without recreating everything from scratch. This comprehensive guide will walk you through the process of exporting existing Cato Network site configurations, modifying them as needed, and importing them into Terraform state for infrastructure-as-code (IaC) management.

### Why This Matters

- Version Control: Track all infrastructure changes in Git
- Consistency: Ensure standardized configurations across all sites
- Automation: Enable CI/CD pipelines for network infrastructure
- Disaster Recovery: Quick restoration from configuration backups
- Bulk Updates: Modify multiple sites simultaneously with confidence

### What is a Cato Brownfield Deployment?

In infrastructure terminology:

- Greenfield Deployment: Building infrastructure from scratch with no existing resources
- Brownfield Deployment: Managing and updating existing infrastructure that's already running in production; in this case, sites that are already configured in the Cato Management Application (CMA)

NOTE: Bulk export and import of sites for brownfield deployments apply to physical socket site deployments (X1500, X1600, X1600_LTE, X1700), as virtual socket sites for cloud deployments include separate cloud resources that are covered by the Terraform modules found here.

For Cato Networks, a brownfield deployment means:

- You already have socket sites, network interfaces, and network ranges configured in the CMA
- You want to start managing, or take over, the configuration of these existing resources using Terraform
- You don't want to delete and recreate everything (which would cause network downtime)
- You need to import existing configurations into Terraform state

The socket-bulk-sites Terraform module, combined with the Cato CLI (catocli), makes this process straightforward and safe.

### Prerequisites

Before starting, ensure you have the following installed on your machine:

- Install Terraform
- Install Python
- Install Cato CLI
- Install Git (optional)

NOTE: It is a best practice to use a version control system to track changes in code and configuration files; this example highlights how to use the Git CLI client and GitHub to do so.

#### Validate Required Tools

```bash
# Python 3.6 or later
python3 --version

# Terraform 0.13 or later
terraform --version

# Cato CLI tool
pip3 install catocli

# Git (recommended for version control)
git --version
```

### Cato API Credentials

You'll need:

- API Token: Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- Account ID: Your Cato account number, found in Account > Account Info or in the CMA URL, for example: https://system.cc.catonetworks.com/#/account/{account_id}/

### Cato Brownfield Deployment Overview

The Cato brownfield deployment workflow consists of four main phases:

- Phase 1: Export - Cato Management Application → catocli → CSV/JSON files
- Phase 2: Import - CSV/JSON files → Terraform State (catocli import command)
- Phase 3: Modify - Edit CSV/JSON files with desired changes (optional)
- Phase 4: Manage - Terraform State → Terraform Apply → Update CMA

#### Components

- Cato CLI (catocli): Command-line tool for exporting and importing configurations
- socket-bulk-sites Module: Terraform module that processes CSV/JSON files
- Terraform State: Tracks which resources are managed by Terraform
- Cato Management Application: The source of truth for your actual network configuration

### Step-by-Step Implementation

#### Step 1: Configure Cato CLI

First, configure the CLI with your API credentials:

```bash
# Interactive configuration (recommended for first-time setup)
catocli configure

# Or configure with environment variables
export CATO_TOKEN="your-api-token-here"
export CATO_ACCOUNT_ID="your-account-id"
```

Verify your configuration:

```bash
# View current configuration
catocli configure show

# List your sites to confirm access
catocli entity site
```

#### Step 2: Create Your Project Directory

Organize your Terraform project with a clear structure:

```bash
# Create project directory
mkdir cato-brownfield-deployment
cd cato-brownfield-deployment

# Initialize git repository (optional)
git init
```

#### Step 3: Set Up Terraform Configuration

Create your main Terraform configuration file (main.tf):

```hcl
terraform {
  required_version = ">= 0.13"

  required_providers {
    cato = {
      source  = "catonetworks/cato"
      version = "~> 0.0.46"
    }
  }
}

provider "cato" {
  baseurl    = "https://api.catonetworks.com/api/v1/graphql2"
  token      = var.cato_token
  account_id = var.account_id
}
```

NOTE: Please refer to the Intro to Terraform instructional video for a guide on setting up authentication, defining Terraform variables, and managing environment variables (such as your API token) to securely initialize the Cato Terraform provider.
### Working with CSV Format

The CSV format is ideal when you want to:

- Edit configurations in Excel or Google Sheets
- Separate site metadata from network ranges
- Have human-readable, easily diff-able files

#### Export to CSV

```bash
# Export all socket sites to CSV format
catocli export socket_sites \
  -f csv \
  --output-directory=config_data_csv
```

This creates:

- socket_sites.csv - Main site configuration
- sites_config/{site_name}_network_ranges.csv - Per-site network ranges

#### Add CSV Module to Terraform

Update your main.tf to include the CSV module:

```hcl
# CSV-based site management
module "sites_from_csv" {
  source = "catonetworks/socket-bulk-sites/cato"

  sites_csv_file_path                  = "config_data_csv/socket_sites.csv"
  sites_csv_network_ranges_folder_path = "config_data_csv/sites_config/"
}
```

#### Import CSV Configuration into Terraform State

```bash
# Initialize Terraform
terraform init

# Import existing resources into Terraform state
catocli import socket_sites_to_tf \
  --data-type csv \
  --csv-file config_data_csv/socket_sites.csv \
  --csv-folder config_data_csv/sites_config/ \
  --module-name module.sites_from_csv \
  --auto-approve

# Review (should show no changes if import was successful)
terraform plan
```

### Working with JSON Format

The JSON format is ideal when you want to:

- Use programmatic tools to manipulate configurations
- Keep all configuration in a single file
- Work with JSON-aware editors and validation tools

#### Export to JSON

```bash
# Export all socket sites to JSON format
catocli export socket_sites \
  -f json \
  --output-directory=config_data
```

### Best Practices

#### 1. Version Control Everything

Use a version control system to manage changes in your configuration files. In this example, the Git client is used to track infrastructure file changes:

```bash
# Initialize repository
git init
git add main.tf
git commit -m "Initial Terraform configuration"
```

#### 2. Regular Exports and Backups

Create automated backup scripts to regularly export your configuration (sites_backup.sh):

```bash
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="backups/$DATE"
mkdir -p "$BACKUP_DIR"
catocli export socket_sites -f json --output-directory="$BACKUP_DIR"
```

### Troubleshooting

#### Issue: Import Fails with "Resource Already Exists"

Symptom: `Error: Resource already exists in state`

Solution:

```bash
# List all items in terraform state
terraform state list

# Show terraform state
terraform show

# Remove the resource from state and re-import
terraform state rm 'module.sites_from_csv.cato_socket_site["Your Cato Site Name Here"]'
```

#### Issue: Plan Shows Unexpected Changes

Symptom: `Plan: 0 to add, 25 to change, 0 to destroy`

Solution:

```bash
# Export fresh configuration from CMA
catocli export socket_sites -f json --output-directory=config_data_verify

# Compare with your current configuration
diff config_data/socket_sites.json config_data_verify/socket_sites.json
```

A structured, Python-based comparison of the two exports is sketched at the end of this article.

### Conclusion

Brownfield deployments for Cato Networks enable you to bring existing infrastructure under version-controlled, automated management without disruption.
By following this guide, you can:

- Eliminate manual configuration errors through automation
- Maintain consistency across hundreds of sites
- Accelerate deployments from days to minutes
- Improve disaster recovery with infrastructure-as-code backups
- Enable collaboration through Git-based workflows
- Ensure compliance with standardized configurations

### Key Takeaways

- Start Small: Begin with exporting a single site, validate the process, then scale
- Test First: Always use terraform plan before terraform apply -parallelism=1
- Version Control: Git is essential for tracking changes and enabling rollbacks
- Automate Backups: Regular exports provide disaster recovery capability
- Document Everything: Clear documentation enables team collaboration

### Additional Resources

- Cato API Essentials - Videos
- Cato Terraform Provider
- Socket-Bulk-Sites Terraform Module
- Cato CLI
- Cato API Documentation
- Learning Center: Using Terraform with Cato Cloud

Happy Infrastructure-as-Code Management!
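As referenced in the troubleshooting section above, here is a minimal, hedged Python sketch that compares two catocli JSON exports as parsed structures rather than raw text. It uses only the standard library; the file paths mirror the "Plan Shows Unexpected Changes" example, and the assumption that the export has a dictionary at its top level is mine, so the script falls back to a simple equality check otherwise.

```python
# Hedged sketch: structural comparison of two catocli JSON exports.
# Paths mirror the troubleshooting example above; adjust to your layout.
import json

def load(path):
    with open(path, "r", encoding="utf-8") as handle:
        return json.load(handle)

current = load("config_data/socket_sites.json")
fresh = load("config_data_verify/socket_sites.json")

if current == fresh:
    print("No structural differences between the two exports.")
elif isinstance(current, dict) and isinstance(fresh, dict):
    # Report which top-level entries were added, removed, or changed.
    for key in sorted(set(current) | set(fresh)):
        if current.get(key) != fresh.get(key):
            print(f"Difference under top-level key: {key}")
else:
    print("Exports differ; inspect them with the diff command shown above.")
```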
## Regarding the execution interval of the Azure Functions template for Cato log integration

I'd like to confirm something about Azure Functions processing.

■ Requirements

- To forward Cato SASE logs to an Azure Log Analytics workspace, I'm using the following Cato log integration template: https://github.com/catonetworks/cato-sentinel-connect/tree/main
- The Azure Functions specs are as follows:
  - OS: Linux
  - Plan: App Service Plan
  - Size: P1v3
  - Type: Custom Handler
  - Trigger: Timer trigger (30-second interval)
- The following logs are targeted for integration:
  - CommonSecurityLog: approximately 2.5-5 MB per 30 seconds (300-600 MB per hour)
  - CatoAuditEngine_CL: less than 0.01 MB per 30 seconds

■ Question

I'm using a 30-second timer trigger, but the actual execution interval is 2 minutes. (The execution interval can be confirmed by counting the "Functions Execution Count" metric.)

Please confirm the following three points:

1. Is the change in execution interval due to the large log volume?
2. What should I do to set the execution interval to 30 seconds? Would scaling up Azure Functions be effective?
3. Even if execution takes a long time, is the log integration being executed without any problems? Are any logs being missed?

Note that in the test environment (log volume per 30 seconds is less than 0.01 MB for both tables), execution is performed every 30 seconds.

## December 2025 Winner - @Nath
Congratulations to Nath for winning the Cato SWIFT award for community excellence and achievement for December 2025!

**Name and Job Title**
Nathan, Network Engineer

**How long have you been in IT/Software/Cybersecurity?**
I've been working in IT for just over seven years, building up experience across networking, security, and infrastructure. Most of that time has been focused on enterprise network operations and secure connectivity.

**What's your favorite part of your job right now?**
The favourite part of my job is implementing new Cato features, especially when they're ones we've been waiting for via the roadmap, or that originated from our own feature requests (there's been a few!). It's always satisfying to see those improvements come to life and make a real impact in production.

**How long have you worked with Cato?**
I've worked with Cato for around four and a half years. I was involved in the initial selection of Cato as our SD-WAN/SASE vendor and played a key role in implementing the migration. Since then, I've continued to stay hands-on with the platform through operations, feature testing, and early access programs.

**What is the number one thing Cato has helped you achieve?**
Cato has given us a true single pane of glass for managing our network and security policies. Users now get a consistent experience wherever they connect from, with the same policies applied globally. It's brought real consistency and simplification across the environment, and troubleshooting issues is now much quicker and easier. (We still get the occasional incident that initially stumps us, but MTTR is significantly lower.)

**What do you want to see more of on the Cato Connect Community?**
I'd love to see more technical deep-dives and interactive sessions around upcoming features, for example, workshops where Cato shares what's on the roadmap and customers can give input on how those features might impact their environments or influence GUI design. Real-world deployment stories or troubleshooting case studies from other customers would also be great to learn from, especially because there are so many legacy topologies out there that necessitate a different migration approach than the one that was necessary for us.

**What do you do for fun when you're not working?**
I recently completed a part-time Masters degree in Advanced Networking, which was challenging but really rewarding. Outside of that, I'm a bodybuilder and train in the gym around six times a week; it's a big part of my lifestyle. I also enjoy playing the piano as a creative outlet away from work and training.

**Any other comments/stories/anything else you'd like to say?**
I've really enjoyed being on the Cato journey. As a customer, we joined around four and a half years ago, and the progress since then has been incredible. Big shout-out to the Cato Support team: they're phenomenal. Always responsive, helpful, and quick to get issues escalated to the right team and resolved promptly.

Thank you so much for being such a big part of our Cato Connect Community journey! We appreciate you and enjoy watching you learn and grow on Cato Connect and beyond :)

## 2-arm VPN router behind Socket
I have a Cisco router from a third-party provider that provides access to that provider's networks. This router uses a 2-arm configuration with WAN and LAN interfaces. The WAN cannot be a publicly routed IP; it must be a private IP. In the router's existing deployment, the WAN interface is connected to a DMZ zone off our legacy firewall, which uses a subnet of 192.168.1.0/24, and the router's LAN interface is connected to a trusted LAN subnet of 172.29.1.0/24. The firewall does not have any inbound ports open to the VPN router's WAN interface, as the router is configured to initiate the VPN tunnel outbound. I need to move this router to sit behind the socket so I can remove the legacy firewall from our network. What would be the best way to set this up? Note that VLANs are terminated on an L3 switch at this location, and I am not looking to move them to the socket at this time. I would also prefer not to have the 192.168.1.0/24 subnet advertised to the entire Cato network (especially to ZTNA clients).

## Cato Rapid7 SIEM API Integration
Followed the configuration steps in the links below, but laid an egg. I mean, the integration still isn't working:

- https://support.catonetworks.com/hc/en-us/articles/13975273800733-Cato-Data-Third-Party-Supported-Integrations
- https://docs.rapid7.com/insightidr/cato-networks/

I've opened tickets with both Cato and Rapid7, since each points to the other as the root cause. It's turning into a real whodunit, fun and frustrating at the same time. If anyone has already solved this mystery, please share any insights.