Recent Discussions
Office mode for Mac users
We have the Always-On policy enabled for all users, and it is causing some trouble for our Mac users. Most of our users are on Windows: when they come into the office behind the Socket, the client detects Office Mode automatically, they do not need to enter credentials, and they get network connectivity just fine. Our Mac users, however, need to enter credentials in the Cato Client before it detects Office Mode. If they do not enter credentials, they have no network connection. Our Mac users are not happy with this, since it adds some inconvenience when they are in the office. I am wondering if anyone has the same challenge, and what the possible workarounds are.
Regarding the execution interval of the Azure Functions template for Cato log integration

I'd like to confirm something about Azure Functions processing.

■ Requirements

- To forward Cato SASE logs to an Azure Log Analytics workspace, I'm using the following Cato log integration template: https://github.com/catonetworks/cato-sentinel-connect/tree/main
- The Azure Functions specs are as follows:
  - OS: Linux
  - Plan: App Service Plan
  - Size: P1v3
  - Type: Custom Handler
  - Trigger: Timer trigger (30-second interval)
- The following logs are targeted for integration:
  - CommonSecurityLog — approximately 2.5-5 MB per 30 seconds (300-600 MB per hour)
  - CatoAuditEngine_CL — less than 0.01 MB per 30 seconds

■ Question

I'm using a 30-second timer trigger, but the actual execution interval is 2 minutes. (The execution interval can be confirmed by counting the "Functions Execution Count" metric.) Please confirm the following three points:

1. Is the change in execution interval due to the large log volume?
2. What should I do to set the execution interval to 30 seconds? Would scaling up Azure Functions be effective?
3. Even if execution takes a long time, is the log integration still being executed without any problems? Are any logs being missed?

Note that in the test environment (log volume per 30 seconds is less than 0.01 MB for both tables), execution is performed every 30 seconds.
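For reference, a 30-second schedule for a timer-triggered function is normally expressed as a six-field NCRONTAB expression (leading seconds field) in the binding's function.json. This is a generic sketch, not the template's actual binding file:

```json
{
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "*/30 * * * * *"
    }
  ]
}
```

One thing worth keeping in mind when comparing the execution-count metric against the schedule: Azure timer triggers do not run overlapping executions of the same function, so if a single run takes longer than 30 seconds (for example, because of log volume), the next occurrence is pushed back until the current run finishes.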
Terraform: IPsec site creation with Responder-only and destination type FQDN possible?

Hi, see subject. When trying to set up an IPsec site (IKEv2) in responder-only mode and with destination type FQDN for the primary and secondary tunnel, Terraform (in fact, OpenTofu) gives this error:

│ Error: Cato API error in SiteAddIpsecIkeV2SiteTunnels
│
│   with cato_ipsec_site.Vienna,
│   on main.tf line 73, in resource "cato_ipsec_site" "Vienna":
│   73: resource "cato_ipsec_site" "Vienna" {
│
│ {"networkErrors":{"code":422,"message":"Response body {\"errors\":[{\"message\":\"input: variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid IPSecV2InterfaceId\",\"path\":[\"variable\",\"updateIpsecIkeV2SiteTunnelsInput\",\"primary\",\"tunnels\",0,\"tunnelId\"]}],\"data\":null}"},"graphqlErrors":[{"message":"input: variable.updateIpsecIkeV2SiteTunnelsInput.primary.tunnels[0].tunnelId is not a valid IPSecV2InterfaceId","path":["variable","updateIpsecIkeV2SiteTunnelsInput","primary","tunnels",0,"tunnelId"]}]}
╵

The error appears when adding the "tunnels" section. Without that section, a deployment is possible. Obviously, though, the tunnels section is required.

--------------------snip--------------------
connection_mode     = "RESPONDER_ONLY"
identification_type = "IPV4"
primary = {
  destination_type = "FQDN"
  tunnels = [
    {
      public_site_ip = "10.10.10.10"
      psk            = "abcABC1234567!!"
      //last_mile_bw = {
      //downstream = 10
      //upstream = 10
    }
  ]
}
---------------snap-------------------------------------

Is this currently supported by the Terraform provider?

Thanks,
Christian
SDP Users - IPV6

Hi all,

We have two users, both currently in Germany on holiday, who can't connect using the Cato SDP client. They get an error about Device Posture. However, when they switch to a mobile hotspot, they connect fine, so it doesn't seem to be the device posture checks. The only thing I've noticed is that both clients are getting an IPv6 address from their broadband router. In the Cato Event log I can see their device IP is a 169.254.x.x address (an IPv4 link-local/APIPA address, i.e. no IPv4 lease was obtained) when they try to connect and are blocked. I just wanted to check whether an IPv6 address could cause an issue like this, or whether there's some extra config we need to do.
Events Filtering

Good day,

I have been trying to use catocli to pull events based on destination IP address, but it returns only one event, while I can see multiple matching events within the same time frame in the CATO portal. I wonder if anyone has come across a similar problem and found a solution.

JSON query:

{
  "eventsDimension": [
    { "fieldName": "dest_ip" }
  ],
  "eventsFilter": [
    {
      "fieldName": "dest_ip",
      "operator": "is",
      "values": "5******8"
    }
  ],
  "eventsMeasure": [
    { "aggType": "any", "fieldName": "action" },
    { "aggType": "any", "fieldName": "src_ip" },
    { "aggType": "any", "fieldName": "src_port" },
    { "aggType": "any", "fieldName": "subnet_name" },
    { "aggType": "any", "fieldName": "dest_ip" },
    { "aggType": "any", "fieldName": "dest_port" }
  ],
  "eventsSort": [
    { "fieldName": "action", "order": "asc" }
  ],
  "timeFrame": "last.P14D"
}

catocli command:

catocli query eventsFeed "json input from variable column"

Response:

{
  "data": {
    "events": {
      "from": "2025-12-09T09:00:00Z",
      "id": "*******",
      "records": [
        {
          "fieldsMap": {
            "action": "Monitor",
            "dest_ip": "************",
            "dest_port": "****",
            "src_ip": "*******",
            "src_port": "*****",
            "subnet_name": "**********"
          },
          "fieldsUnitTypes": [
            "none", "none", "none", "none", "none", "none"
          ],
          "flatFields": [
            [ "action", "Monitor" ],
            [ "dest_ip", "****************" ],
            [ "dest_port", "************" ],
            [ "src_ip", "**************" ],
            [ "src_port", "***********" ],
            [ "subnet_name", "***************" ]
          ],
          "prevTimeFrame": null,
          "trends": null
        }
      ],
      "to": "2025-12-23T10:00:00Z",
      "total": 1,
      "totals": {
        "action": "********",
        "dest_ip": *****,
        "dest_port": *****,
        "src_ip": "********",
        "src_port": ****,
        "subnet_name": "***********"
      }
    }
  }
}

If anyone has any ideas, do kindly share. Thanks very much.
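On the processing side, once a response does come back, a minimal sketch for flattening the `records[].fieldsMap` entries into plain rows (field names taken from the sample response shown in the question; the sample values here are illustrative, not real data):

```python
import json

def flatten_events(response_text: str) -> list[dict]:
    """Return one flat dict per event record from an events API response."""
    payload = json.loads(response_text)
    events = payload.get("data", {}).get("events") or {}
    return [rec["fieldsMap"] for rec in events.get("records", [])]

# Illustrative response in the same shape as the one shown above
# (203.0.113.8 is a documentation-range placeholder address).
sample = json.dumps({
    "data": {"events": {"total": 1, "records": [
        {"fieldsMap": {"action": "Monitor",
                       "dest_ip": "203.0.113.8",
                       "dest_port": "443"}}
    ]}}
})

for row in flatten_events(sample):
    print(row["action"], row["dest_ip"])  # prints: Monitor 203.0.113.8
```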
Degraded Sockets in High Availability

I have multiple customers that have an LTE SIM card on the main Socket only. This causes the Sockets to identify asymmetric WAN connections, which triggers the DEGRADED alert. What can I do to disable the DEGRADED alarm for the site? Would it be possible to disable the interfaces so the asymmetric connections don't raise an alarm?
User group specified reports

We need to schedule a daily report for users who log in from a specific user group. The report should capture all users from the identified group who have logged in each day. Kindly confirm the feasibility and share the steps or requirements to enable this reporting.

Additionally, when exporting the overall users list, the respective user group details should also be included in the report. Kindly confirm the feasibility and share the required steps or prerequisites to enable this.
Has anyone successfully queried the auditFeed endpoint using the Cato API?

I'm trying to automate daily audit/change reporting from our Cato tenant using the auditFeed GraphQL endpoint. I can successfully authenticate and run other queries (such as accountMetrics), but every valid auditFeed request results in the following error:

{
  "errors": [
    {
      "message": "internal server error",
      "path": ["auditFeed", "timeFrame"]
    }
  ],
  "data": {
    "auditFeed": null
  }
}

Here is the minimal reproducible query:

query TestAuditFeed($accountIds: [ID!]!, $timeFrame: TimeFrame!) {
  auditFeed(accountIDs: $accountIds, timeFrame: $timeFrame) {
    from
    to
    fetchedCount
    hasMore
    marker
    accounts {
      id
    }
  }
}

Variables:

{
  "accountIds": ["<my-account-id>"],
  "timeFrame": { "last": "P1D" }
}

This request passes schema validation, but the resolver returns an internal error every time. Attempts with from/to, small windows, and other valid TimeFrame shapes produce the same error. Introspection (__type) is disabled for my tenant, so I cannot check field-level definitions.

Question: Has anyone successfully used auditFeed in a production Cato tenant? If so, could you share a working query + variables example, or any insight into the required schema structure or known limitations? I'd appreciate any help validating that this will work, or identifying what issue I'm running up against. Thank you.
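For comparing notes, a minimal sketch of how the request body from the question can be assembled in Python. The query and variables are copied from above; the account ID is a placeholder, and the endpoint URL and `x-api-key` header named in the comment are assumptions based on a common Cato GraphQL setup, so check them against your tenant's API documentation:

```python
import json

# Query copied verbatim from the question above.
AUDIT_FEED_QUERY = """
query TestAuditFeed($accountIds: [ID!]!, $timeFrame: TimeFrame!) {
  auditFeed(accountIDs: $accountIds, timeFrame: $timeFrame) {
    from
    to
    fetchedCount
    hasMore
    marker
    accounts { id }
  }
}
"""

def build_payload(account_id: str, last: str = "P1D") -> str:
    """Serialize the GraphQL request body for an auditFeed call."""
    return json.dumps({
        "query": AUDIT_FEED_QUERY,
        "variables": {
            "accountIds": [account_id],
            "timeFrame": {"last": last},
        },
    })

body = build_payload("12345")  # placeholder account ID
# POST `body` to your tenant's GraphQL endpoint with your API key, e.g.:
#   requests.post("https://api.catonetworks.com/api/v1/graphql2",
#                 data=body,
#                 headers={"x-api-key": API_KEY,
#                          "Content-Type": "application/json"})
print(json.loads(body)["variables"]["timeFrame"]["last"])  # P1D
```

If a plain POST of this body reproduces the same "internal server error", that would at least rule out client-side serialization as the cause.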