# Recent Content
## Site Management API Multi-Tool Workshop
Welcome to this hands-on workshop, where you'll learn to manage Cato Networks infrastructure (socket sites, network interfaces, and network ranges) using three different tools in a real-world workflow. This exercise outlines the API structure for managing site configurations, demonstrates the flexibility of the Cato API ecosystem, and teaches you when and how to use each tool for maximum efficiency.

### What You'll Learn

By the end of this workshop, you'll be able to:

- Install, configure, and use the Cato API Explorer (a containerized, web-based GUI) that generates code syntax for Python, catocli, and cURL
- Install, configure, and use the Cato CLI to both read and update configurations
- Create new Cato sites and network interfaces, and add network ranges to interfaces via the API

### Why Use Multiple Tools?

In real-world scenarios, you'll often use different tools for different tasks:

| Tool | Best For | Use Case |
|------|----------|----------|
| API Explorer | Testing new APIs, one-off changes, learning | Initial site creation, exploring API capabilities |
| Cato CLI | OS-agnostic bulk operations, automation scripts | Updating multiple sites, generating reports |
| cURL | Calling APIs directly, troubleshooting | Integrating with existing automation, minimal dependencies |

### Prerequisites

Before starting, ensure you have the following installed on your machine:

- Python
- Cato CLI
- Docker Desktop (Mac, Windows, or Linux)

NOTE: Manually start the Docker application before checking whether it is running (on macOS: `open -a docker`).

#### Validate Required Tools

```bash
# 1. Docker (for API Explorer)
docker --version

# 2. Python 3.6+
python3 --version

# 3. Cato CLI
catocli --version

# 4. cURL
curl --version
```

#### Cato API Credentials

You'll need:

- **API Token:** Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- **Account ID:** Your Cato account number, found in **Account > Account Info** or in the CMA URL, for example: `https://system.cc.catonetworks.com/#/account/{account_id}/`

### Site Management API Workshop Overview

The site workshop workflow consists of five main phases:

1. **Phase 1:** Create Site using Cato API Explorer (Docker Web UI)
2. **Phase 2:** Retrieve Site ID using Cato CLI
3. **Phase 3:** Update Interface using Cato CLI
4. **Phase 4:** Retrieve Interface ID using Cato CLI
5. **Phase 5:** Add Network Range using cURL from Cato API Explorer

### Phase 1: Create a Site Using API Explorer

#### Step 1.1: Launch the API Explorer

The Cato API Explorer is a Docker-based web application that provides an interactive GUI for testing GraphQL API calls.
```bash
mkdir cato-api-explorer
cd cato-api-explorer

# Create docker-compose.yml
cat << 'EOF' > docker-compose.yml
services:
  cato-api-explorer:
    container_name: cato-api-explorer
    image: ghcr.io/catonetworks/cato-api-explorer:latest
    ports:
      - 8080:8080
      - 8443:443
EOF

# Pull and start the container
docker-compose pull
docker-compose up -d
```

#### Step 1.2: Access the API Explorer

```bash
# Open in your browser
open http://localhost:8080
```

#### Step 1.3: Configure API Credentials

1. Click on the Settings tab (gear icon)
2. Enter your API Endpoint, API Token, and Account ID
3. Click Save Settings

#### Step 1.4: Create the Site

Follow these steps in the API Explorer:

1. Navigate to the GraphQL API tab and enter `addSocketSite` in the API Operation field
2. Select `mutation.site.addSocketSite()` from the dropdown
3. Click Edit on the `addSocketSiteInput` field and fill out the required fields
4. Change `connectionType` to `SOCKET_X1600`, and the site name to `My 1600 Site`
5. Configure the `siteLocation` with your desired city, state, and country

Request Variables should reflect:

```json
{
  "accountId": "12345",
  "addSocketSiteInput": {
    "connectionType": "SOCKET_X1600",
    "name": "My 1600 Site",
    "nativeNetworkRange": "10.111.0.0/24",
    "siteLocation": {
      "city": "San Diego",
      "countryCode": "US",
      "stateCode": "US-CA",
      "timezone": "America/Los_Angeles"
    },
    "siteType": "BRANCH"
  }
}
```

Click **Execute** and save the returned siteID.

*Example mutation.site.addSocketSite() screenshot in API Explorer (image not shown).*

### Phase 2: Retrieve Site ID Using Cato CLI

Now that we've created the site, let's verify it exists and retrieve its ID using the Cato CLI.

#### Step 2.1: Configure Cato CLI

```bash
# Interactive configuration
catocli configure
```

#### Step 2.2: Search for the Site

```bash
# Use help menus
catocli -h
catocli entity -h

# Search by site name
catocli entity site list -s "My 1600 Site"

# Pretty print JSON output
catocli entity site -p

# Format as CSV
catocli entity site -s "My 1600 Site" -f csv
```

### Phase 3: Update Interface Using Cato CLI

Now we'll update the site's network interface configuration using syntax generated from the API Explorer.

#### Step 3.1: List Existing Interfaces

By default, when creating a Cato site, the site will have one LAN interface and one WAN interface. The default LAN interface will be configured as the native range used when creating the site.
```bash
# Use entityLookup to get interface info
catocli query entityLookup '{
  "entityInput": { "id": "12345", "type": "site" },
  "type": "networkInterface"
}'
```

#### Step 3.2: Update the Interface

In the API Explorer, configure the interface update:

1. Navigate to the GraphQL API tab and enter `updateSocketInterface`
2. Select `INT_7` as the interface to configure
3. Set `destType` to `LAN`
4. Configure the `subnet` and `localIp`

Request Variables should reflect:

```json
{
  "accountId": "12345",
  "siteId": "172807",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}
```

*Example mutation.site.updateSocketInterface() screenshot in API Explorer (image not shown).*

#### Step 3.3: Execute with Cato CLI

Copy the Cato CLI syntax from the API Explorer and execute it using your siteID:

```bash
catocli mutation site updateSocketInterface '{
  "siteId": "12345",
  "socketInterfaceId": "INT_7",
  "updateSocketInterfaceInput": {
    "destType": "LAN",
    "lan": {
      "localIp": "10.112.0.1",
      "subnet": "10.112.0.0/24"
    }
  }
}'
```

### Phase 4: Retrieve Interface ID

After updating the interface, retrieve the Interface Entity ID for adding network ranges:

```bash
# Retrieve interface details
catocli entity networkInterface list -f csv

# Or use entityLookup
catocli query entityLookup '{
  "entityInput": { "id": "12345", "type": "site" },
  "type": "networkInterface"
}'
```

Save the Interface Entity ID for the INT_7 interface for use in Phase 5.

### Phase 5: Add Network Range Using cURL

Finally, we'll add a network range to the INT_7 interface using a raw cURL command.

#### Step 5.1: Configure in API Explorer

1. In the API Explorer, navigate to `addNetworkRange`
2. Select the LAN_7 interface
3. Configure the network range parameters (name, subnet, VLAN, DHCP)
4. Uncheck the **Mask secret key** checkbox to reveal your API key

*Example mutation.site.addNetworkRange() screenshot in API Explorer (image not shown).*

#### Step 5.2: Execute cURL Command

Copy the cURL command from the API Explorer and execute it in your terminal:

```bash
curl -k -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "x-API-Key: YOUR_API_KEY_HERE" \
  'https://api.catonetworks.com/api/v1/graphql2' \
  --data '{
    "query": "mutation siteAddNetworkRange ( $lanSocketInterfaceId:ID! $addNetworkRangeInput:AddNetworkRangeInput! $accountId:ID! ) { site ( accountId:$accountId ) { addNetworkRange ( lanSocketInterfaceId:$lanSocketInterfaceId input:$addNetworkRangeInput ) { networkRangeId } } }",
    "variables": {
      "accountId": "11362",
      "addNetworkRangeInput": {
        "dhcpSettings": { "dhcpType": "ACCOUNT_DEFAULT" },
        "localIp": "10.113.0.1",
        "name": "Custom Network",
        "rangeType": "VLAN",
        "subnet": "10.113.0.0/24",
        "vlan": 123
      },
      "lanSocketInterfaceId": "207469"
    },
    "operationName": "siteAddNetworkRange"
  }'
```

Expected Response (Network Range ID returned):

```json
{
  "data": {
    "site": {
      "addNetworkRange": {
        "networkRangeId": "UzY1NDI4Mg=="
      }
    }
  }
}
```

### Key Takeaways

#### When to Use Each Tool

**API Explorer (Web GUI):**
- Initial testing and exploration
- Learning the API structure
- One-off changes during troubleshooting
- Generating cURL and Python templates

**Cato CLI (catocli):**
- Bulk operations and reporting
- Automation scripts
- Quick queries from the command line
- CSV/JSON export capabilities

**cURL (Raw API):**
- Troubleshooting and calling APIs directly
- Minimal dependencies
- Custom error handling with verbose output (`-v` flag)
- Integration examples for any programming language

### Additional Resources

- Cato API Essentials - Videos
- Cato CLI
- Cato API Documentation

**Congratulations on Completing the Workshop!** You now have hands-on experience with three powerful API tools.
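As a final reference, the same Phase 5 call can also be scripted in Python (one of the code examples the Explorer generates alongside cURL and catocli). Below is a minimal `requests`-based sketch equivalent to the cURL command above; the endpoint, mutation, and variables are taken verbatim from Phase 5, while the API key is read from an environment variable you would set yourself:

```python
import os
import requests

# Same endpoint and mutation as the Phase 5 cURL example
CATO_API_URL = "https://api.catonetworks.com/api/v1/graphql2"
API_KEY = os.environ["CATO_API_KEY"]  # export your token before running

QUERY = """
mutation siteAddNetworkRange($lanSocketInterfaceId: ID!, $addNetworkRangeInput: AddNetworkRangeInput!, $accountId: ID!) {
  site(accountId: $accountId) {
    addNetworkRange(lanSocketInterfaceId: $lanSocketInterfaceId, input: $addNetworkRangeInput) {
      networkRangeId
    }
  }
}
"""

variables = {
    "accountId": "11362",              # replace with your account ID
    "lanSocketInterfaceId": "207469",  # Interface Entity ID from Phase 4
    "addNetworkRangeInput": {
        "dhcpSettings": {"dhcpType": "ACCOUNT_DEFAULT"},
        "localIp": "10.113.0.1",
        "name": "Custom Network",
        "rangeType": "VLAN",
        "subnet": "10.113.0.0/24",
        "vlan": 123,
    },
}

response = requests.post(
    CATO_API_URL,
    json={"query": QUERY, "variables": variables,
          "operationName": "siteAddNetworkRange"},
    headers={"x-API-Key": API_KEY, "Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expect data.site.addNetworkRange.networkRangeId
```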
## Brownfield Deployments for Cato Network Sites

Have you ever found yourself managing dozens or even hundreds of Cato Network sites manually through the Cato Management Application (CMA), wishing there was a better way to maintain consistency, version control, and automation? Cato brownfield deployments (or Day 2 operations) solve exactly this problem by enabling you to bring your existing Cato infrastructure under Terraform management without recreating everything from scratch. This comprehensive guide will walk you through the process of exporting existing Cato Network site configurations, modifying them as needed, and importing them into Terraform state for infrastructure-as-code (IaC) management.

### Why This Matters

- **Version Control:** Track all infrastructure changes in Git
- **Consistency:** Ensure standardized configurations across all sites
- **Automation:** Enable CI/CD pipelines for network infrastructure
- **Disaster Recovery:** Quick restoration from configuration backups
- **Bulk Updates:** Modify multiple sites simultaneously with confidence

### What is a Cato Brownfield Deployment?

In infrastructure terminology:

- **Greenfield Deployment:** Building infrastructure from scratch with no existing resources
- **Brownfield Deployment:** Managing and updating existing infrastructure that's already running in production; in this case, sites that are already configured in the Cato Management Application (CMA)

NOTE: Bulk export and import of sites for brownfield deployments applies to physical socket site deployments (X1500, X1600, X1600_LTE, X1700), as virtual socket sites for cloud deployments include separate cloud resources that are covered by the Terraform modules found here.

For Cato Networks, a brownfield deployment means:

- You already have socket sites, network interfaces, and network ranges configured in the CMA
- You want to start managing, or take over the configuration of, these existing resources using Terraform
- You don't want to delete and recreate everything (which would cause network downtime)
- You need to import existing configurations into Terraform state

The socket-bulk-sites Terraform module, combined with the Cato CLI (catocli), makes this process straightforward and safe.

### Prerequisites

Before starting, ensure you have the following installed on your machine:

- Terraform
- Python
- Cato CLI
- Git (optional)

NOTE: It is a best practice to use a version control system to track changes in code and configuration files; this example highlights how to use the Git CLI client and GitHub to do so.

#### Validate Required Tools

```bash
# Python 3.6 or later
python3 --version

# Terraform 0.13 or later
terraform --version

# Cato CLI tool
pip3 install catocli

# Git (recommended for version control)
git --version
```

#### Cato API Credentials

You'll need:

- **API Token:** Generated from the Cato Management Application. Refer to Generating API Keys for the Cato API. NOTE: Save the token securely (you won't be able to view it again).
- **Account ID:** Your Cato account number, found in **Account > Account Info** or in the CMA URL, for example: `https://system.cc.catonetworks.com/#/account/{account_id}/`

### Cato Brownfield Deployment Overview

The Cato brownfield deployment workflow consists of four main phases:

1. **Phase 1: Export** - Cato Management Application → catocli → CSV/JSON files
2. **Phase 2: Import** - CSV/JSON files → Terraform state (catocli import command)
3. **Phase 3: Modify** - Edit CSV/JSON files with desired changes (optional)
4. **Phase 4: Manage** - Terraform state → terraform apply → update CMA

#### Components

- **Cato CLI (catocli):** Command-line tool for exporting and importing configurations
- **socket-bulk-sites Module:** Terraform module that processes CSV/JSON files
- **Terraform State:** Tracks which resources are managed by Terraform
- **Cato Management Application:** The source of truth for your actual network configuration

### Step-by-Step Implementation

#### Step 1: Configure Cato CLI

First, configure the CLI with your API credentials:

```bash
# Interactive configuration (recommended for first-time setup)
catocli configure

# Or configure with environment variables
export CATO_TOKEN="your-api-token-here"
export CATO_ACCOUNT_ID="your-account-id"
```

Verify your configuration:

```bash
# View current configuration
catocli configure show

# List your sites to confirm access
catocli entity site
```

#### Step 2: Create Your Project Directory

Organize your Terraform project with a clear structure:

```bash
# Create project directory
mkdir cato-brownfield-deployment
cd cato-brownfield-deployment

# Initialize git repository (optional)
git init
```

#### Step 3: Set Up Terraform Configuration

Create your main Terraform configuration file (`main.tf`):

```hcl
terraform {
  required_version = ">= 0.13"
  required_providers {
    cato = {
      source  = "catonetworks/cato"
      version = "~> 0.0.46"
    }
  }
}

provider "cato" {
  baseurl    = "https://api.catonetworks.com/api/v1/graphql2"
  token      = var.cato_token
  account_id = var.account_id
}
```

NOTE: Please refer to the Intro to Terraform instructional video for a guide on setting up authentication, defining Terraform variables, and managing environment variables such as your API token, so you can securely initialize the Cato Terraform provider.
### Working with CSV Format

The CSV format is ideal when you want to:

- Edit configurations in Excel or Google Sheets
- Separate site metadata from network ranges
- Have human-readable, easily diff-able files

#### Export to CSV

```bash
# Export all socket sites to CSV format
catocli export socket_sites \
  -f csv \
  --output-directory=config_data_csv
```

This creates:

- `socket_sites.csv` - Main site configuration
- `sites_config/{site_name}_network_ranges.csv` - Per-site network ranges

#### Add CSV Module to Terraform

Update your `main.tf` to include the CSV module:

```hcl
# CSV-based site management
module "sites_from_csv" {
  source                               = "catonetworks/socket-bulk-sites/cato"
  sites_csv_file_path                  = "config_data_csv/socket_sites.csv"
  sites_csv_network_ranges_folder_path = "config_data_csv/sites_config/"
}
```

#### Import CSV Configuration into Terraform State

```bash
# Initialize Terraform
terraform init

# Import existing resources into Terraform state
catocli import socket_sites_to_tf \
  --data-type csv \
  --csv-file config_data_csv/socket_sites.csv \
  --csv-folder config_data_csv/sites_config/ \
  --module-name module.sites_from_csv \
  --auto-approve

# Review (should show no changes if import was successful)
terraform plan
```

### Working with JSON Format

The JSON format is ideal when you want to:

- Use programmatic tools to manipulate configurations
- Keep all configuration in a single file
- Work with JSON-aware editors and validation tools

#### Export to JSON

```bash
# Export all socket sites to JSON format
catocli export socket_sites \
  -f json \
  --output-directory=config_data
```

### Best Practices

#### 1. Version Control Everything

Use a version control system to manage changes in your configuration files. In this example, the Git client is used to track infrastructure file changes:

```bash
# Initialize repository
git init
git add main.tf
git commit -m "Initial Terraform configuration"
```

#### 2. Regular Exports and Backups

Create automated backup scripts to regularly export your configuration (`sites_backup.sh`):

```bash
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="backups/$DATE"
mkdir -p "$BACKUP_DIR"
catocli export socket_sites -f json --output-directory="$BACKUP_DIR"
```

### Troubleshooting

#### Issue: Import Fails with "Resource Already Exists"

Symptom: `Error: Resource already exists in state`

Solution:

```bash
# List all items in terraform state
terraform state list

# Show terraform state
terraform show

# Remove the resource from state and re-import
terraform state rm 'module.sites_from_csv.cato_socket_site["Your Cato Site Name Here"]'
```

#### Issue: Plan Shows Unexpected Changes

Symptom: `Plan: 0 to add, 25 to change, 0 to destroy`

Solution:

```bash
# Export fresh configuration from CMA
catocli export socket_sites -f json --output-directory=config_data_verify

# Compare with your current configuration
diff config_data/socket_sites.json config_data_verify/socket_sites.json
```

### Conclusion

Brownfield deployments for Cato Networks enable you to bring existing infrastructure under version-controlled, automated management without disruption.
By following this guide, you can:

- Eliminate manual configuration errors through automation
- Maintain consistency across hundreds of sites
- Accelerate deployments from days to minutes
- Improve disaster recovery with infrastructure-as-code backups
- Enable collaboration through Git-based workflows
- Ensure compliance with standardized configurations

### Key Takeaways

- **Start Small:** Begin with exporting a single site, validate the process, then scale
- **Test First:** Always use `terraform plan` before `terraform apply -parallelism=1`
- **Version Control:** Git is essential for tracking changes and enabling rollbacks
- **Automate Backups:** Regular exports provide disaster recovery capability
- **Document Everything:** Clear documentation enables team collaboration

### Additional Resources

- Cato API Essentials - Videos
- Cato Terraform Provider
- Socket-Bulk-Sites Terraform Module
- Cato CLI
- Cato API Documentation
- Learning Center: Using Terraform with Cato Cloud

Happy Infrastructure-as-Code Management!
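As a companion to the troubleshooting steps above, the same drift check can be scripted. Below is a minimal Python sketch, assuming two `socket_sites.json` exports produced by catocli as shown earlier; the exact JSON structure of the export may differ in your environment, so adjust the comparison as needed:

```python
import difflib
import json
import sys

def load(path):
    """Load an exported socket_sites.json file."""
    with open(path) as f:
        return json.load(f)

def main(current_path, fresh_path):
    current, fresh = load(current_path), load(fresh_path)
    if current == fresh:
        print("No drift detected between exports.")
        return
    # Normalize both documents and print a line-level diff of the differences
    a = json.dumps(current, indent=2, sort_keys=True).splitlines()
    b = json.dumps(fresh, indent=2, sort_keys=True).splitlines()
    for line in difflib.unified_diff(a, b, "current", "fresh", lineterm=""):
        print(line)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run it as `python3 diff_exports.py config_data/socket_sites.json config_data_verify/socket_sites.json`; a non-empty diff means the CMA has drifted from your last export.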
## Deploying the Cato Sentinel Connector | Full Installation & Setup Guide for Azure Sentinel

In this video, we'll walk you through the complete setup of the Cato Sentinel Connector, connecting your Cato Networks environment to Microsoft Azure Sentinel for unified visibility and smarter threat detection.

**What You'll Learn:**

- Setting up your Azure environment (Resource Group, Log Analytics Workspace, Sentinel)
- Creating your Cato API key and finding your Account ID
- Deploying the ARM template / Azure Function App to ingest Cato Events, CEF, Audit Logs & XDR Stories
- Configuring data ingestion and filters for efficient log collection
- Installing the Sentinel Workbook to visualize Cato data
- Best practices and tuning tips for ingestion and workspace setup

**Who It's For:** Security engineers, SOC analysts, and IT professionals using Cato Networks who want to enhance visibility through Azure Sentinel.

**Prerequisites:**

- Active Cato Networks account with API access
- Azure subscription with Sentinel enabled
- Permissions to deploy ARM templates and Function Apps

By the end of this tutorial, you'll have a fully operational integration between Cato Networks and Azure Sentinel, empowering your team with advanced insights and real-time threat correlation.

GitHub repository: https://github.com/catonetworks/cato-sentinel-connect
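Before deploying the connector, it can be handy to confirm that your API key and Account ID can actually pull events. The sketch below is illustrative only: it assumes the eventsFeed GraphQL query with marker-based pagination (the same event feed the Function App ingests), and the field names should be verified against the current Cato API schema before you rely on it:

```python
import os
import requests

CATO_API_URL = "https://api.catonetworks.com/api/v1/graphql2"

# Assumed query shape -- verify field names against the current Cato schema
EVENTS_FEED = """
query eventsFeed($accountIDs: [ID!]!, $marker: String) {
  eventsFeed(accountIDs: $accountIDs, marker: $marker) {
    marker
    fetchedCount
    accounts {
      id
      records {
        fieldsMap
      }
    }
  }
}
"""

def fetch_events(api_key: str, account_id: str, marker: str = ""):
    """Fetch one page of events; returns (records, next_marker)."""
    resp = requests.post(
        CATO_API_URL,
        json={"query": EVENTS_FEED,
              "variables": {"accountIDs": [account_id], "marker": marker}},
        headers={"x-API-Key": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    if "errors" in body:
        # e.g. event integration may need to be enabled in the CMA first
        raise RuntimeError(body["errors"])
    feed = body["data"]["eventsFeed"]
    records = [r for account in feed["accounts"] for r in account["records"]]
    return records, feed["marker"]

if __name__ == "__main__":
    records, next_marker = fetch_events(os.environ["CATO_API_KEY"],
                                        os.environ["CATO_ACCOUNT_ID"])
    print(f"Fetched {len(records)} events; next marker: {next_marker}")
```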
## Introduction to Terraform - From Setup to Your First Apply
Welcome to this hands-on introduction to Terraform, the powerful Infrastructure as Code (IaC) tool that lets you automate cloud resource provisioning with ease.

**In this video, you'll learn:**

- How to install Terraform on your system
- What Terraform state and commands are and how they work
- How to run your first `terraform apply` to deploy real infrastructure

By the end of this session, you'll have a working Terraform environment and a solid understanding of how to manage your infrastructure efficiently and reproducibly.

**Perfect for:** Beginners, DevOps engineers, cloud professionals, and anyone looking to get started with Infrastructure as Code.

**Resources mentioned:**

- Terraform Official Documentation: https://developer.hashicorp.com/terraform/docs
- Terraform Download: https://developer.hashicorp.com/terraform/install
## Introducing the Cato GraphQL API Playground
Explore how the Cato GraphQL API Playground streamlines API exploration, query development, and team collaboration. In this video, you'll learn how to:

- Interactively explore the GraphQL schema and available API operations
- Write and test queries in real time with syntax highlighting
- Debug and troubleshoot API responses efficiently
- Save query history and share examples with your team for better collaboration

Whether you're a developer, tester, or network engineer, the Cato GraphQL API Playground helps you accelerate development and improve API reliability.
## Introduction to Cato API - Setup to Your First Real API Call
Transcript:

Hey, guys. It's Josh Snow and Brian Anderson. Today, we're gonna be talking about the Cato API. I'm really stoked for this because the API is extremely powerful at Cato, and there's a lot you can do with it. But we're gonna really help demystify that. And so today, we're gonna talk about what the Cato API is, and then how you can make your first API call. So, Brian, take it away. Let's walk through what the Cato API is and some of the possibilities.

Awesome. Thank you. Let's first frame the conversation about what we're looking to achieve or what the outcomes are, why we would even use the Cato API. So if you're familiar with the Cato platform, obviously, there is a global private backbone we wanna connect to. We need to perform different types of tasks like creating sites, maybe provisioning rules, and maybe getting data out of the platform: security events, security incidents, and that sort of thing. So typically, that starts off with, you know, version control systems, and maybe you have other platforms and frameworks that would help enable the automation of provisioning configurations, like Terraform, Pulumi, or Azure DevOps, and there's a whole suite of tool sets that are out there. And then secondarily, there might be other teams that would use other solutions like Ansible, Chef or Puppet or Salt to provision new rules, for example, or make exceptions or modify network configurations. And then lastly, there's what do we do with the data that we have. In the Cato platform, we see all ports, all protocols, a very rich dataset. How do we make that actionable? We want to invoke an ITSM system, like a ServiceNow, maybe send data to Splunk or different business units or your network operations team or security operations team that may be interacting with your data in the platform that way. So all the arrows that we see here in this diagram are APIs that would enable the interactions between these different platforms.

That's awesome. All I see is possibility. And what's great today is we're gonna give you the foundational elements to generate your own API call, which will start you on your journey of building any of the things that we just showed today. Some of it's gonna be off the shelf that you can build. Others, it's gonna start to build your own. And that's what we're gonna walk you through today.

Awesome. Yeah. So starting at the high level here, we always start with the documentation. This is our Cato management API documentation, and many folks in the industry are familiar with REST APIs or SOAP APIs historically. This is a GraphQL API. It still works as an HTTP request. You still send a POST request. But in this case, the way GraphQL works is it has the concept of queries, which is the ability to pull and retrieve information or data. And then there's something called mutations, which are the way that you would create, update, or delete different configurations. So this is the documentation. You can feel free to kind of sift through that if you want some exciting reading on the weekend. And we're gonna do a separate video on this so that you can consume that on its own if that's something you're interested in.

Awesome. But we'll do separate videos also on something called the playground. And the playground is kind of an off-the-shelf, you know, tool set that's exposed to GraphQL that lets you test your own API calls. And so we'll do one separate session on how to use this. You also may be familiar with a client called Postman, a very common API testing tool that may also help automate certain tasks and so forth. So we'll have a separate session on Postman. But today, we're gonna be covering a tool that's on github.com/catonetworks called the Cato API Explorer. The Cato API Explorer is an application that runs on top of Docker. So there's an assumption that you already have Docker installed here. We're gonna have a separate session on how to install that, but essentially, all you do is download this from docker.com and start it up, and you could then work through standing up the Cato API Explorer locally on your system to be able to use.

Cool. Super easy.

So initially, what we can do is either download all of this code directly from GitHub, or really all that you need is this one file, the docker-compose.yml file. So if I literally just take this one file, I could either download that from here or just copy this text. I put that into a file on my local workstation. So in a folder that I've created, I've got a docker-compose.yml file, and then I've added this text to that file. So that's all that I've done so far apart from having Docker running on my system. And then what I do in a terminal window is I would make sure that I'm in this folder, the same folder that I have this docker-compose file present in. And this is gonna point to basically where we publish our container. So all that we need to do, if Docker is running, is run docker-compose pull. I can't type and talk at the same time. And what this will do is tell Docker to look at the file that you have, and it's gonna pull this image down from where we publish it in what's called the container registry. So now I've pulled this container down. The next command that I'm gonna run is docker-compose up -d. And what that does is stands up the container, and then it runs as a background daemon process. So it continues to run on my system here. Now one important thing to note is this configuration, which you could leave as the default, but this is port 8080. And on the workstation that I'm running on, when I go in a browser, that is the port that I'm gonna be accessing this application on. So when I go to a browser window on the machine this application is running on, I should be able to refresh localhost port 8080, and that is how I get access to my application. And so, essentially, now the Explorer is running locally.

The first thing that I wanna do in the Explorer is configure an API key. And what does that mean, and how does that work? So if I transition back over to the Cato Management Application, under the resources tab, under API keys, I can create a brand new key, and I can give this whatever name that I want. And I can define whether or not I want this to be a view or an edit key. And then if I wanna restrict access for my API key to only come from certain IPs, I can do that, which is a good security practice. And then we can choose how long you want this key to be good for. So it's not a forever key that lives in the system. So once I create this, I hit save. I would get prompted to copy that key down. What I would do then is go add a new key here and put in my information. And there are a couple of different options for servers you can use: there's a US one, there's an Ireland one, there's an India one. Essentially, it works like this. Put in the name for my key, and the account ID, which, by the way, in the Cato Management Application is easiest found in the URL here; that is your account ID to use. And then if I add those attributes there, whenever I save my credentials, the API Explorer will then test those credentials and make sure that they're valid. And so now I've essentially connected to my Cato Management Application tenant.

So now you're ready to make API calls at this point. Right?

That's exactly right.

That's awesome.

So in the Explorer, the interesting thing about this is that every time you refresh this page, it pulls down what's called our production schema. And what is that? This is all of the API calls that are supported for the Cato Management Application. So today, we might have, you know, fifty, a hundred; tomorrow might be five hundred. All of those are now available in an easy dropdown where I can see what calls are available to me in the platform, and all that I have to do is simply select one. And what this client does, which is different from either Postman or the GraphQL playground, is it automatically renders all the parameters and all of the arguments that I would need to construct one of my API calls. And down here in the middle, just to share with everyone, this is every possible parameter that is available to be selected for what's called the account snapshot. Now the account snapshot is a call that gives you a snapshot of the account: all your sites, all your sockets, all your users, that sort of thing. It gives you some information about that. And I can choose here whether or not I wanna filter for certain sites. Notice that when I selected this...

That's cool.

...it automatically drops in what's called site IDs. So I don't need to go look those up and construct my payload. But what's interesting here is if I hit execute, I now get an interactive live response back from the API. So what I've done in this few-minute session here is I've just created a key, and now I'm already making API calls against the Cato Management Application interactively.

Even just the sites you wanted. So if you only wanted certain sites, that's all you would get from this specific call. That's cool.

That's exactly right. So I can uncheck this, and it removes those sites from my payload. And now I interactively get back all of the sites from my entire environment. So another interesting thing here is down in this bottom right section: these are code examples. And we have a Cato CLI, which we'll talk about in a moment, which generates code examples for me incorporating the parameters that I select. We also have Python examples, where I can literally copy and paste this syntax into a Python script that I run on my local system. I run it and it works, provided that I have my authentication, my API key, in an environment variable and so forth, and it'll walk you through how to do that. And then lastly, we have even a cURL example. Now cURL is simply a command-line way to run a web request. That's it. And so if I paste this into a terminal window and I put in my key, which I could choose to unmask if I wanted, I can paste it into a terminal window or into a shell script, and it will work. And that's exactly the same call that I just selected from this dropdown here. So it's that easy now. We are interacting and making calls against our own CMA tenant.

That's huge. So now you've gone all the way from creating the API key to whatever individual call you wanna be able to make, and then you can even add those request variables, which I've always found tricky as a new API user. Like, where do I change a variable? I messed up the bracket or this. This just makes that process easy. And so once you have this, Brian, you just copy that into your terminal interface to be able to start making the API calls. Right?

That's exactly right. So I can take this and copy this into a terminal window, or I could put it into a bash script, or I can use the Python script, which is a very, very common programming language that a lot of people use for integrations and automation work. But the next call I'd like to talk about: we already talked about the account snapshot, which gives you data about the account, users, and insights and so forth, but there's also a general-purpose call called the entity lookup call. And the way that the Cato API is structured, a lot of the read operations you're looking for go through it. For example, if I wanna look up my interfaces or my site ranges, I can pick what I wanna look up, and then I can run this and it will give me back a list of whatever thing I'm selecting from that list. So here are all the site ranges that I have in my account, for example. This is a demo account that we're running here. I can also choose to look up maybe, you know, all of my VPN users that might be in my environment, right, or even my sites and so forth. So this gives me a list of all of those.

You kind of validate the response to see, alright, I'm putting together this bigger Chef script or this provisioning, or these are the events I want, say. So you can start to validate those here, what you're starting to put together in a bigger view, by understanding these responses. Say, oh yeah, that looks good. Now I'm going to take that code and add that to my larger playbook. I know it's kind of a rookie way to say it, but is that a fair assessment?

What you want as a developer is to first see what the response looks like, so that when I start to develop code, I know what object I'm looking to get back. I know that it's data, and then there's a child node called entity lookup: data.entityLookup.items. This is a list. It's an array of items. So I would know what the data structure is that I'm going to be parsing in my response, and then I would know how to make that useful for whatever kind of task I'm working on, whether it's importing something or maybe exporting something and then doing something with that. Maybe I wanna pull down all the sites that we have, or lists of applications, or whatever it is; this helps enable you to test interactively what that call is gonna look like.

Yeah.

So let's talk about how to use this a little bit more interactively. Maybe I'm looking up a list of sites, or maybe I'm looking for network interfaces, for example. And so this is all the network interfaces in my environment, but I actually get a lot of these back. And I maybe don't want all of these at one time, so maybe I wanna create a filter for a particular site. I wanna filter my network interfaces. So in here, this gives me the ability to edit the entity input. What I wanna do is filter for a site, and I'm gonna filter for the value of site ID. And when I hit add here, notice that it adds this to the parent, and now it actually passes this down into my variables here, and it carries through...

Yeah.

...into all these different code generation tools, where now if I run that request again, I only get back the one interface that is for that site.

Yeah. So what I did is I just created that kind of dynamic filter on the fly from the data I got back from the API. Right?

Awesome. This has been super helpful, Brian. In future videos, we are going to start to stitch this together for other integrations, putting this together inside of deeper scripts so that you can then take this and really shine with the Cato API. But this is such a cool tool. Thanks for introducing it, Brian, because it allows people like me who kind of know APIs to get really dangerous with it. And for a lot of people, it takes that intimidation level away. And so I really appreciate what you built here, and I'm excited to try to use it and hope everyone else is too.

Yep. One last little thing we wanted to cover here would be the CLI. And so what I've done so far here is generate a request, and it gave me example syntax. I can simply install the Cato CLI in a terminal window by running pip3 install catocli, if you have Python installed on your system. Then I would run catocli configure, and that would walk me through setting up my credentials. And then if I put my credentials in, just like I got them from the Cato Management Application, I can now take this exact query that I got from the Explorer and run it in the terminal window and get the exact same output. So they're designed to work hand in hand.

That's huge.

I don't need to write a lot of code. I just need the command to run, and I can use the Explorer that I have here to create any payload that I want for any API. And then I literally just copy this out, drop it in a terminal, drop it in a shell script, and you're off and running.

Yeah. So then you can just start iterating, iterating, iterating. Right? That's huge. Alright, Brian. Thanks so much. Appreciate your time. Look for the next videos, guys. Thanks so much.

Thanks, everyone.

----

Let us know what you think and what you'd like to learn more about.
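For anyone who wants to try the entity lookup call from the video outside of the Explorer, here is a minimal Python sketch. The query shape follows the pattern discussed above (an accountID, an entity type, and a data.entityLookup.items array in the response); treat the exact argument and field names as assumptions and generate the authoritative syntax from the API Explorer or Playground:

```python
import os
import requests

CATO_API_URL = "https://api.catonetworks.com/api/v1/graphql2"

# entityLookup as discussed in the video -- response is data.entityLookup.items
QUERY = """
query entityLookup($accountID: ID!, $type: EntityType!) {
  entityLookup(accountID: $accountID, type: $type) {
    items {
      entity {
        id
        name
      }
    }
  }
}
"""

def list_entities(entity_type: str = "site"):
    """Return the items array for one entity type (e.g. site, networkInterface)."""
    resp = requests.post(
        CATO_API_URL,
        json={"query": QUERY,
              "variables": {"accountID": os.environ["CATO_ACCOUNT_ID"],
                            "type": entity_type}},
        headers={"x-API-Key": os.environ["CATO_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["entityLookup"]["items"]

if __name__ == "__main__":
    for item in list_entities("site"):
        print(item["entity"]["id"], item["entity"]["name"])
```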
## You Ask a Good Question: Top 5 Applications Per Site, by Total Bandwidth
**The Ask:** I'd like to be able to see the top 5 applications, per site, by total bandwidth. Basically, this graph (screenshot not shown) multiple times.

**API Guy answer:** My solution is a multi-query approach.

**Step 1:** Use an appStats query to get the list of site names and their total traffic (query screenshot not shown).

**Step 2:** Iterate over each site, calling an appStats() query for each one, with the site name as the filter. Here's an example for one site (screenshot not shown).

You will then need to calculate the percentages based on the total for each site from the first query.
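To make the multi-query approach concrete, here is a minimal Python sketch of the iteration and percentage math. The appStats query string, time-frame literal, and dimension/measure/filter names are illustrative assumptions; generate the exact appStats syntax from the API Explorer or Playground, since the precise field names depend on your schema version:

```python
import os
import requests

CATO_API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"x-API-Key": os.environ["CATO_API_KEY"]}
ACCOUNT_ID = os.environ["CATO_ACCOUNT_ID"]

# Illustrative appStats document -- generate the exact dimensions/measures/
# filters syntax from the API Explorer, as field names vary by schema version.
APP_STATS = """
query appStats($accountID: ID!, $timeFrame: TimeFrame!, $dimensions: [Dimension],
               $measures: [Measure], $filters: [AppStatsFilter]) {
  appStats(accountID: $accountID, timeFrame: $timeFrame, dimensions: $dimensions,
           measures: $measures, filters: $filters) {
    records { fieldsMap }
  }
}
"""

def app_stats(dimensions, filters=None):
    """Run one appStats query and return the list of record field maps."""
    variables = {
        "accountID": ACCOUNT_ID,
        "timeFrame": "last.P1D",  # assumed time-frame literal; adjust as needed
        "dimensions": dimensions,
        "measures": [{"fieldName": "traffic", "aggType": "sum"}],
        "filters": filters or [],
    }
    resp = requests.post(CATO_API_URL,
                         json={"query": APP_STATS, "variables": variables},
                         headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json()["data"]["appStats"]["records"]

# Step 1: total traffic per site
totals = {r["fieldsMap"]["site"]: float(r["fieldsMap"]["traffic"])
          for r in app_stats([{"fieldName": "site"}])}

# Step 2: top 5 applications for each site, with percentage of the site total
for site, total in totals.items():
    records = app_stats(
        [{"fieldName": "application"}],
        filters=[{"fieldName": "site", "operator": "is", "values": [site]}],
    )
    top5 = sorted(records, key=lambda r: float(r["fieldsMap"]["traffic"]),
                  reverse=True)[:5]
    print(f"\n{site} (total {total:.0f} bytes)")
    for r in top5:
        name = r["fieldsMap"]["application"]
        traffic = float(r["fieldsMap"]["traffic"])
        pct = traffic / total if total else 0.0
        print(f"  {name}: {traffic:.0f} bytes ({pct:.1%})")
```

The two-pass design mirrors the answer above: one aggregate query for site totals, then one filtered query per site, with the percentages computed client-side from the first query's totals.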