Recent Discussions
Mismatch in User Count: Cato Report vs GraphQL API Output
We are encountering an issue with the User GraphQL query. For example, when we generate a manual report from Cato, we receive approximately 1,750 users. However, when fetching data via the API, we only get around 768 users. It appears that the API returns only users with an active Cato connection; we receive no data for assets or users that are not currently connected. Could you please confirm whether this is an expected limitation of the API, or whether there is a way to retrieve all users, including those that are not currently connected?
Mukeshkumar20 · 2 days ago · Making Connections · 63 Views · 0 likes · 10 Comments

I could use some help with a Powershell script for the events feed.
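On the user-count mismatch above: one way to narrow down which users the API omits is to export the manual CMA report and diff it against the names returned by the API. A minimal Python sketch; the sample names and the assumption that the report rows carry a `name` field are hypothetical, not taken from Cato's schema:

```python
def missing_from_api(report_rows, api_names):
    """Return report user names that never appear in the API output.

    report_rows: dicts with a hypothetical "name" key (e.g. parsed from a
    CSV export of the manual report); api_names: names from the API query.
    Comparison is case-insensitive to avoid false mismatches.
    """
    api_set = {name.strip().lower() for name in api_names}
    return sorted(
        row["name"] for row in report_rows
        if row["name"].strip().lower() not in api_set
    )

if __name__ == "__main__":
    report = [{"name": "Alice"}, {"name": "Bob"}, {"name": "Carol"}]
    api = ["alice", "carol"]  # e.g. only users with an active connection
    print(missing_from_api(report, api))  # ['Bob']
```

If the missing set turns out to be exactly the disconnected users, that would support the "active connection only" theory in the question.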
For some work I am doing to track DHCP events, I have been looking at the Events Feed API and am having trouble getting code working in PowerShell. I know my API key/account ID are good because I have other scripts/tasks running daily, but I am struggling with the events feed. I would rather avoid Python because everything else I have runs in PowerShell and I am not very comfortable in Python. I believe events are enabled correctly in my account. I asked ChatGPT to review my efforts and add debugging; this is my current script, and it returns 0 events. You can toggle $DEBUG_MODE = $true/$false as needed. Can someone let me know if you get results with this code? Thanks.

```powershell
# start script ---------------------------------------------------------------

# ====== CONFIGURATION ======
$API_URL    = "https://api.catonetworks.com/api/v1/graphql2"
$API_KEY    = "YOUR_CATO_API_KEY"
$ACCOUNT_ID = "YOUR_ACCOUNT_ID"
$DEBUG_MODE = $true
$MAX_LOOPS  = 3
# ===========================

$query = @"
query EventsFeed(`$accountIDs: [ID!], `$marker: String) {
  eventsFeed(accountIDs: `$accountIDs, marker: `$marker) {
    marker
    fetchedCount
    accounts {
      id
      records {
        time
        fieldsMap
      }
    }
  }
}
"@

function Write-DebugLog {
    param(
        [string]$Message,
        $Data = $null
    )
    if (-not $DEBUG_MODE) { return }
    Write-Output "[DEBUG] $Message"
    if ($null -ne $Data) {
        try {
            if ($Data -is [string]) {
                Write-Output $Data
            } else {
                $json = $Data | ConvertTo-Json -Depth 20
                Write-Output $json
            }
        } catch {
            Write-Output "[DEBUG] Could not serialize debug data."
            Write-Output ($Data | Out-String)
        }
    }
    Write-Output ("=" * 80)
}

function Print-Event {
    param($Record)
    $timeStr = $Record.time
    try {
        $dt = [datetime]::Parse($Record.time)
        $timeStr = $dt.ToUniversalTime().ToString("yyyy-MM-dd HH:mm:ss 'UTC'")
    } catch {}
    $f = $Record.fieldsMap
    Write-Output "[$timeStr] $($f.event_type) / $($f.event_sub_type): $($f.message)"
    Write-Output "  User: $($f.user_display_name)"
    Write-Output "  App:  $($f.application)"
    Write-Output "  Src:  $($f.src_ip)"
    Write-Output "  Dst:  $($f.dest_ip)"
    Write-Output ("-" * 80)
}

function Fetch-Events {
    $headers = @{
        "x-api-key"    = $API_KEY
        "Content-Type" = "application/json"
    }
    $marker      = ""
    $totalEvents = 0
    $loopCount   = 0

    while ($true) {
        $loopCount++
        if ($loopCount -gt $MAX_LOOPS) {
            Write-Output "[INFO] Reached MAX_LOOPS limit ($MAX_LOOPS)."
            break
        }

        $variables = @{
            accountIDs = @($ACCOUNT_ID)
            marker     = $marker
        }
        $bodyObject = @{
            query     = $query
            variables = $variables
        }
        $body = $bodyObject | ConvertTo-Json -Depth 20

        try {
            Write-DebugLog "Request URI" $API_URL
            Write-DebugLog "Request headers" @{
                "x-api-key"    = "***REDACTED***"
                "Content-Type" = "application/json"
            }
            Write-DebugLog "Request variables" $variables
            Write-DebugLog "Request body object" $bodyObject
            Write-DebugLog "Request body JSON" $body

            $response = Invoke-RestMethod `
                -Uri $API_URL `
                -Method Post `
                -Headers $headers `
                -Body $body `
                -TimeoutSec 30

            Write-DebugLog "Parsed API response" $response
        } catch {
            Write-Error "[ERROR] API request failed: $($_.Exception.Message)"
            if ($_.ErrorDetails -and $_.ErrorDetails.Message) {
                Write-Output "[DEBUG] ErrorDetails:"
                Write-Output $_.ErrorDetails.Message
                Write-Output ("=" * 80)
            }
            break
        }

        if ($response.errors) {
            Write-Error "[ERROR] API returned GraphQL errors."
            Write-DebugLog "GraphQL errors" $response.errors
            break
        }
        if (-not $response.data) {
            Write-Output "[DEBUG] Response has no 'data' property."
            Write-DebugLog "Full parsed response" $response
            break
        }

        $feed = $response.data.eventsFeed
        if (-not $feed) {
            Write-Output "[DEBUG] Response has no 'data.eventsFeed'."
            Write-DebugLog "Full parsed response" $response
            break
        }
        Write-DebugLog "eventsFeed object" $feed

        $batchCount = 0
        if (-not $feed.accounts) {
            Write-Output "[DEBUG] eventsFeed.accounts is null or empty."
        } else {
            Write-Output "[DEBUG] Number of accounts returned: $($feed.accounts.Count)"
        }

        foreach ($account in $feed.accounts) {
            Write-Output "[DEBUG] Inspecting account id: $($account.id)"
            if (-not $account.records) {
                Write-Output "[DEBUG] No records returned for this account."
                continue
            }
            Write-Output "[DEBUG] Records returned for account $($account.id): $($account.records.Count)"
            foreach ($record in $account.records) {
                if ($DEBUG_MODE -and $batchCount -lt 3) {
                    Write-DebugLog "Sample record" $record
                    if ($record.fieldsMap) {
                        Write-DebugLog "Sample fieldsMap keys" ($record.fieldsMap.PSObject.Properties.Name)
                    }
                }
                Print-Event $record
                $totalEvents++
                $batchCount++
            }
        }

        Write-Output "[INFO] Batch fetched: $($feed.fetchedCount)"
        Write-Output "[DEBUG] Batch printed: $batchCount"
        Write-Output "[DEBUG] Next marker: $($feed.marker)"

        if (($feed.fetchedCount -eq 0) -or [string]::IsNullOrWhiteSpace($feed.marker)) {
            Write-Output "[INFO] Stopping because fetchedCount is 0 or marker is empty."
            break
        }
        $marker = $feed.marker
    }

    Write-Output "[INFO] Total events retrieved: $totalEvents"
}

Fetch-Events
# end script -----------------------------------------------------------------
```

ddaniel · 9 days ago · Staying Involved · 61 Views · 1 like · 3 Comments

Custom Category creation via API/Terraform
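Returning to the eventsFeed script above: the marker-pagination loop can be isolated from the HTTP layer and verified on its own, which helps tell a transport problem apart from a logic problem. A minimal Python sketch of the same loop; the `fetch_page` stub and the sample pages are hypothetical stand-ins for the real POST to the graphql2 endpoint:

```python
def drain_events_feed(fetch_page, max_loops=3):
    """Follow eventsFeed markers until fetchedCount is 0, the marker is
    empty, or max_loops is reached; return the total record count.

    fetch_page(marker) must return the eventsFeed payload, i.e. a dict
    shaped like {"marker": ..., "fetchedCount": ..., "accounts": [...]}.
    """
    marker, total, loops = "", 0, 0
    while loops < max_loops:
        loops += 1
        feed = fetch_page(marker)
        for account in feed.get("accounts") or []:
            total += len(account.get("records") or [])
        # Same stop conditions as the PowerShell script.
        if feed.get("fetchedCount", 0) == 0 or not feed.get("marker"):
            break
        marker = feed["marker"]
    return total

# Stubbed transport: two pages with events, then an empty page.
pages = {
    "":   {"marker": "m1", "fetchedCount": 2, "accounts": [{"id": "1", "records": [{}, {}]}]},
    "m1": {"marker": "m2", "fetchedCount": 1, "accounts": [{"id": "1", "records": [{}]}]},
    "m2": {"marker": "",   "fetchedCount": 0, "accounts": []},
}
print(drain_events_feed(pages.__getitem__))  # 3
```

If this logic also reports 0 against a real first page, the feed itself is returning nothing, which usually points at the account-side event-feed configuration rather than the script.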
I need to create a `Custom Category` via code. Is there an API or Terraform resource available for this? I couldn't find it in the docs.
DevS1 · 7 days ago · Joining the Conversation · 229 Views · 1 like · 6 Comments

Getting the DHCP Pools information via API
I need to get the information under DHCP Pools to monitor the utilization percentage of each subnet per site socket. However, when pulling "dhcpPools" I get a permission-denied error. Is there a GraphQL query that can retrieve this information from Cato?

This is my query:

```graphql
query dhcpPools($accountID: ID!, $siteId: ID!, $protoId: ID!, $search: String) {
  dhcpPools(
    accountID: $accountID
    siteId: $siteId
    protoId: $protoId
    search: $search
  ) {
    dhcpPools {
      ...DhcpPoolData
      __typename
    }
    __typename
  }
}

fragment DhcpPoolData on DhcpPool {
  subnetRange {
    ...EntityData
    __typename
  }
  dhcpRange {
    ...EntityData
    __typename
  }
  allocatedIPs
  availableIPs
  __typename
}

fragment EntityData on Entity {
  id
  type
  name
  __typename
}
```

These are my variables:

```json
{
  "accountID": "2015",
  "siteId": "105762",
  "protoId": "1000000070",
  "search": ""
}
```

Jaycen · 24 days ago · Joining the Conversation · 54 Views · 0 likes · 2 Comments

Using GraphQL to query the LastMilePacketLoss statistic
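For the DHCP pool question above: once `allocatedIPs` and `availableIPs` come back per pool, the per-subnet utilization percentage is simple arithmetic. A short sketch; the field names come from the query in the post, while the pool names and counts are made-up sample values:

```python
def pool_utilization(allocated: int, available: int) -> float:
    """Percentage of the DHCP range in use, treating the range size as
    allocated + available addresses. Returns 0.0 for an empty range."""
    total = allocated + available
    if total == 0:
        return 0.0
    return round(100.0 * allocated / total, 1)

pools = [
    {"name": "10.0.1.0/24", "allocatedIPs": 180, "availableIPs": 20},
    {"name": "10.0.2.0/24", "allocatedIPs": 10,  "availableIPs": 190},
]
for p in pools:
    print(p["name"], pool_utilization(p["allocatedIPs"], p["availableIPs"]), "%")
# 10.0.1.0/24 90.0 %
# 10.0.2.0/24 5.0 %
```

This leaves the permission-denied error itself untouched; that has to be resolved on the API-key side before the numbers flow.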
I am using the query below to fetch the LastMilePacketLoss statistic, but the response does not include any data for LastMilePacketLoss.

Request URL: https://api.catonetworks.com/api/v1/graphql2

Request body:

```graphql
query accountMetrics($accountID: ID!, $timeFrame: TimeFrame!, $groupInterfaces: Boolean, $groupDevices: Boolean, $siteIDs: [ID!]) {
  accountMetrics(
    accountID: $accountID
    timeFrame: $timeFrame
    groupInterfaces: $groupInterfaces
    groupDevices: $groupDevices
  ) {
    id
    from
    sites(siteIDs: $siteIDs) {
      id
      interfaces {
        name
      }
      info {
        sockets {
          id
          isPrimary
        }
      }
      metrics {
        bytesUpstream
        bytesDownstream
        flowCount
      }
      name
    }
    timeseries(labels: lastMilePacketLoss) {
      sum
      units
      label
    }
    to
  }
}
```

Response:

```json
{
  "data": {
    "accountMetrics": {
      "id": "xxxx",
      "from": "2026-03-01T00:00:00Z",
      "sites": [
        {
          "id": "xxxxx",
          "interfaces": [
            { "name": "Primary-WAN" },
            { "name": "Secondary-WAN" }
          ],
          "info": {
            "sockets": [
              { "id": "xxxxx", "isPrimary": false },
              { "id": "xxxxx", "isPrimary": true }
            ]
          },
          "metrics": {
            "bytesUpstream": 234144508140,
            "bytesDownstream": 464289852590,
            "flowCount": 5274
          },
          "name": "xxxxxx"
        }
      ],
      "timeseries": [
        { "sum": 0, "units": "percent", "label": "sitePacketsDiscardedDownstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "bytesTotal" },
        { "sum": 0, "units": "bytes", "label": "bytesDownstream" },
        { "sum": 0, "units": "packets", "label": "packetsDiscardedUpstream" },
        { "sum": 0, "units": "percent", "label": "lostUpstreamPcnt" },
        { "sum": 0, "units": "percent", "label": "lostDownstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "siteDownstreamThroughputMax" },
        { "sum": 0, "units": "bytes", "label": "bytesDownstreamMax" },
        { "sum": 0, "units": "packets", "label": "lostUpstream" },
        { "sum": 0, "units": "count", "label": "hostLimit" },
        { "sum": 0, "units": "ms", "label": "jitterUpstream" },
        { "sum": 0, "units": "bytes", "label": "siteBandwidthLimitDownstream" },
        { "sum": 0, "units": "bytes", "label": "bytesUpstream" },
        { "sum": 0, "units": "packets", "label": "lostDownstream" },
        { "sum": 0, "units": "ms", "label": "rtt" },
        { "sum": 0, "units": "seconds", "label": "tunnelAge" },
        { "sum": 0, "units": "count", "label": "hostCount" },
        { "sum": 0, "units": "packets", "label": "packetsDiscardedDownstream" },
        { "sum": 0, "units": "score", "label": "health" },
        { "sum": 0, "units": "ms", "label": "jitterDownstream" },
        { "sum": 0, "units": "percent", "label": "packetsDiscardedUpstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "siteUpstreamThroughputMax" },
        { "sum": 0, "units": "bytes", "label": "siteBandwidthLimitUpstream" },
        { "sum": 0, "units": "bytes", "label": "siteDailyP95" },
        { "sum": 0, "units": "count", "label": "flowCount" },
        { "sum": 0, "units": "packets", "label": "packetsUpstream" },
        { "sum": 0, "units": "packets", "label": "packetsDownstream" },
        { "sum": 0, "units": "percent", "label": "sitePacketsDiscardedUpstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "bytesUpstreamMax" },
        { "sum": 0, "units": "percent", "label": "packetsDiscardedDownstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "bytesDownstreamMax" },
        { "sum": 0, "units": "packets", "label": "lostUpstream" },
        { "sum": 0, "units": "count", "label": "hostLimit" },
        { "sum": 0, "units": "ms", "label": "jitterUpstream" },
        { "sum": 0, "units": "bytes", "label": "siteBandwidthLimitDownstream" },
        { "sum": 0, "units": "bytes", "label": "bytesUpstream" },
        { "sum": 0, "units": "packets", "label": "lostDownstream" },
        { "sum": 0, "units": "ms", "label": "rtt" },
        { "sum": 0, "units": "seconds", "label": "tunnelAge" },
        { "sum": 0, "units": "count", "label": "hostCount" },
        { "sum": 0, "units": "packets", "label": "packetsDiscardedDownstream" },
        { "sum": 0, "units": "score", "label": "health" },
        { "sum": 0, "units": "ms", "label": "jitterDownstream" },
        { "sum": 0, "units": "percent", "label": "packetsDiscardedUpstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "siteUpstreamThroughputMax" },
        { "sum": 0, "units": "bytes", "label": "siteBandwidthLimitUpstream" },
        { "sum": 0, "units": "bytes", "label": "siteDailyP95" },
        { "sum": 0, "units": "count", "label": "flowCount" },
        { "sum": 0, "units": "packets", "label": "packetsUpstream" },
        { "sum": 0, "units": "packets", "label": "packetsDownstream" },
        { "sum": 0, "units": "percent", "label": "sitePacketsDiscardedUpstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "bytesUpstreamMax" },
        { "sum": 0, "units": "percent", "label": "packetsDiscardedDownstreamPcnt" },
        { "sum": 0, "units": "percent", "label": "sitePacketsDiscardedDownstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "bytesTotal" },
        { "sum": 0, "units": "bytes", "label": "bytesDownstream" },
        { "sum": 0, "units": "packets", "label": "packetsDiscardedUpstream" },
        { "sum": 0, "units": "percent", "label": "lostUpstreamPcnt" },
        { "sum": 0, "units": "percent", "label": "lostDownstreamPcnt" },
        { "sum": 0, "units": "bytes", "label": "siteDownstreamThroughputMax" }
      ],
      "to": "2026-03-12T23:59:59Z"
    }
  }
}
```

Soon · 1 month ago · Joining the Conversation · 55 Views · 0 likes · 1 Comment

Are there any APIs for local/client information?
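On the LastMilePacketLoss question above: the returned timeseries carries many labels but no `lastMilePacketLoss` entry at all, which may mean the `labels` argument is not being applied as written, or that the metric is exposed under a different label. A quick Python check that pulls only the requested labels out of a parsed response makes the gap easy to spot (the two sample entries are taken from the response in the post):

```python
def find_labels(timeseries, wanted):
    """Keep only the timeseries entries whose label is in `wanted`.
    An empty result means the API never returned the requested metric."""
    wanted = set(wanted)
    return [entry for entry in timeseries if entry.get("label") in wanted]

response_timeseries = [
    {"sum": 0, "units": "percent", "label": "lostUpstreamPcnt"},
    {"sum": 0, "units": "ms", "label": "rtt"},
]
print(find_labels(response_timeseries, ["lastMilePacketLoss"]))  # []
print(len(find_labels(response_timeseries, ["rtt"])))            # 1
```

Dumping the distinct labels this way (e.g. `{e["label"] for e in timeseries}`) also shows whether a similarly named metric, such as the lost-packet percentage labels, is what the backend actually returns.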
Are there any approved ways to query Cato SDP client information on the local workstation? (FYI: my clients are 90% Windows.) For example, I would like to query session info such as: public IP address before Cato, public IP address after Cato, session time, stats, any DEM/session-quality info, connect/disconnect events, other(?). I don't know if this is available in any supported way. I had a local API with my last VPN vendor and found it useful, but I don't currently know if this is available and/or whether anyone else would find it useful. I can use the current Graph API to go to the cloud, find my session, and get details, but wondered if any of this is available locally.
ddaniel · 1 month ago · Staying Involved · 80 Views · 0 likes · 2 Comments

Permission errors when testing Cato API with Python
Hi all, I am currently working on a project to automate workflows in Cato with Python. I've already set and reviewed my API permissions, and the key should inherit my account's permissions, which can view and edit most resources. However, I still get this error:

```
HTTP 200
{
    "errors": [
        {
            "message": "permission denied",
            "path": [
                "licensing",
                "licensingInfo"
            ],
            "extensions": {
                "code": "Code104"
            }
        }
    ],
    "data": {
        "licensing": {
            "licensingInfo": null
        }
    }
}
```

I've been scouring the documentation for specific troubleshooting steps, but I couldn't seem to find the answers I'm looking for. Any chance some folks could give me a quick guide on how to ensure my API keys have the right permissions? By the way, this is the sample script I'm testing; it pulls available licensing information for monitoring:

```python
import asyncio
import json
import os

import aiohttp

API_KEY = os.getenv("CATO_API_KEY")
API_URL = "https://api.catonetworks.com/api/v1/graphql2"

QUERY = """
{
  licensing(accountId: <ID_HERE>) {
    licensingInfo {
      globalLicenseAllocations {
        ztnaUsers {
          total
          allocated
          available
        }
      }
    }
  }
}
"""

async def main():
    headers = {
        "x-api-key": API_KEY,
        "Content-Type": "application/json"
    }
    async with aiohttp.ClientSession(headers=headers) as session:
        async with session.post(API_URL, json={"query": QUERY}) as resp:
            print("HTTP", resp.status)
            print(json.dumps(await resp.json(), indent=4))

asyncio.run(main())
```

Solved · Elmark · 1 month ago · Joining the Conversation · 232 Views · 0 likes · 6 Comments

API for Creating Users in CMA
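A note on the permission error in the thread above: the API answers HTTP 200 even when the GraphQL layer rejects the call, so a script should inspect the `errors` array rather than the status code. A small helper sketched against the exact payload shown in that post (the helper name and exception are my own, not part of any Cato SDK):

```python
class CatoGraphQLError(Exception):
    """Raised when a GraphQL response body carries an errors array."""

def check_graphql(payload: dict) -> dict:
    """Raise on GraphQL-level errors; otherwise return the data section."""
    errors = payload.get("errors")
    if errors:
        first = errors[0]
        code = (first.get("extensions") or {}).get("code", "?")
        path = ".".join(first.get("path", []))
        raise CatoGraphQLError(f"{first.get('message')} ({code}) at {path}")
    return payload.get("data", {})

# The payload from the post:
resp = {
    "errors": [{
        "message": "permission denied",
        "path": ["licensing", "licensingInfo"],
        "extensions": {"code": "Code104"},
    }],
    "data": {"licensing": {"licensingInfo": None}},
}
try:
    check_graphql(resp)
except CatoGraphQLError as e:
    print(e)  # permission denied (Code104) at licensing.licensingInfo
```

Surfacing the `path` and `extensions.code` this way at least pins the denial to the exact field (`licensing.licensingInfo`), which is useful when asking support which permission the key is missing.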
We don’t have an IdP environment, so we need to manually provision a large number of users in CMA. I couldn’t find any API call in the API Reference that would allow us to do this. Is there an API that can be used to create/register users? I apologize if I have overlooked it in the documentation.
AKH · 2 months ago · Joining the Conversation · 71 Views · 0 likes · 1 Comment

Regarding the execution interval of the Azure Functions template for Cato log integration
I'd like to confirm something about Azure Functions processing.

■ Requirements
- To forward Cato SASE logs to an Azure Log Analytics workspace, I'm using the following Cato log integration template: https://github.com/catonetworks/cato-sentinel-connect/tree/main
- The Azure Functions specs are as follows:
  - OS: Linux
  - Plan: App Service Plan
  - Size: P1v3
  - Type: Custom Handler
  - Trigger: Timer trigger (30-second interval)
- The following logs are targeted for integration:
  - CommonSecurityLog: approximately 2.5-5 MB per 30 seconds (300-600 MB per hour)
  - CatoAuditEngine_CL: less than 0.01 MB per 30 seconds

■ Question
I'm using a 30-second timer trigger, but the actual execution interval is 2 minutes. (The execution interval can be confirmed by counting the "Functions Execution Count" metric.) Please confirm the following three points:
1. Is the change in execution interval due to the large log volume?
2. What should I do to set the execution interval to 30 seconds? Would scaling up Azure Functions be effective?
3. Even if execution takes a long time, is log integration still being executed without problems? Are any logs being missed?

Note that in the test environment (log volume per 30 seconds is less than 0.01 MB for both tables), execution is performed every 30 seconds.
gaetansimo · 2 months ago · Making Connections · 80 Views · 0 likes · 1 Comment

Terraform: IPsec site creation with Responder-only and destination type FQDN possible?
Hi, see subject. When trying to set up an IPsec site (IKEv2) in responder-only mode with destination type FQDN for the primary and secondary tunnels, Terraform (in fact OpenTofu) gives this error:

```
│ Error: Cato API error in SiteAddIpsecIkeV2SiteTunnels
│
│   with cato_ipsec_site.Vienna,
│   on main.tf line 73, in resource "cato_ipsec_site" "Vienna":
│   73: resource "cato_ipsec_site" "Vienna" {
│
│ {"networkErrors":{"code":422,"message":"Response body {\"errors\":[{\"message\":\"input:
│ variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid
│ IPSecV2InterfaceId\",\"path\":[\"variable\",\"updateIpsecIkeV2SiteTunnelsInput\",\"primary\",\"tunnels\",0,\"tunnelId\"]}],\"data\":null}"},"graphqlErrors":[{"message":"input:
│ variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid
│ IPSecV2InterfaceId","path":["variable","updateIpsecIkeV2SiteTunnelsInput","primary","tunnels",0,"tunnelId"]}]}
╵
```

The error appears when adding the "tunnels" section; without that section, a deployment is possible. Obviously, though, the tunnels section is required.

```hcl
connection_mode     = "RESPONDER_ONLY"
identification_type = "IPV4"
primary = {
  destination_type = "FQDN"
  tunnels = [
    {
      public_site_ip = "10.10.10.10"
      psk            = "abcABC1234567!!"
      //last_mile_bw = {
      //  downstream = 10
      //  upstream = 10
    }
  ]
}
```

Is that currently supported by the Terraform provider? Thanks, Christian
Deckel · 3 months ago · Joining the Conversation · 160 Views · 0 likes · 3 Comments