Recent Content
Bypassing Cato via WAN Bypass and Split Tunnel
We need to add around 200 subnets to bypass Cato. My understanding is that they need to be added to all sites under Site Configuration/Router/Bypass/Destination, and for all SDP users via the Access/Client Access Control/Split Tunnel policy. We have nearly 90 sites, and manually adding 200 subnets to 90 sites doesn't seem like a good time. Is this possible via the API? If so, can you point me toward the correct commands?

Can Cato API - AuditFeed be used in S3 integration?
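For the bulk-bypass question above, the usual pattern is to script the change against the GraphQL endpoint rather than click through 90 sites. The sketch below is hypothetical: `updateSiteBypass` is a placeholder mutation name (the real site/bypass mutation and its arguments must be confirmed in the current Cato GraphQL schema), and `YOUR_KEY`, the site IDs, and the subnet list stand in for your own data. Only the batching and throttling structure is the point.

```python
import time
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_KEY"}

# Placeholder mutation -- the real mutation name and arguments must be
# taken from the live Cato GraphQL schema; this one is hypothetical.
MUTATION = """
mutation updateSiteBypass($accountId: ID!, $siteId: ID!, $subnets: [String!]!) {
  updateSiteBypass(accountId: $accountId, siteId: $siteId, subnets: $subnets) {
    id
  }
}
"""

def build_requests(account_id, site_ids, subnets):
    """One request payload per site, each carrying the full subnet list."""
    return [
        {
            "query": MUTATION,
            "variables": {"accountId": account_id, "siteId": s, "subnets": subnets},
        }
        for s in site_ids
    ]

def push_all(payloads, delay=2.0):
    """POST each payload, pausing between calls to respect rate limits."""
    for p in payloads:
        r = requests.post(API_URL, headers=HEADERS, json=p)
        r.raise_for_status()
        time.sleep(delay)
```

With ~90 sites and one call per site, a 2-second delay keeps the run at or under 30 requests per minute, and the whole job finishes in a few minutes.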
Hi Team, A customer is trying to push audit trail logs to the Amazon S3 integration. Looking at the documentation, I do not see how this is possible. I was wondering if there is any way to accomplish this, or if it requires an RFE.

How to Delete VPN Users via GraphQL API
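On the AuditFeed question: one workaround, separate from the built-in S3 integration, is to poll the auditFeed GraphQL query yourself and write each page to S3. The sketch below assumes boto3 for the S3 side and hedges the exact auditFeed response fields (`marker`, `fetchedCount`, and the record shape should be verified against the live schema); marker-based paging is the consumption pattern.

```python
import json
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_KEY"}

# Field names here are indicative -- confirm them against the live schema.
AUDIT_QUERY = """
query auditFeed($accountIDs: [ID!], $marker: String) {
  auditFeed(accountIDs: $accountIDs, marker: $marker) {
    marker
    fetchedCount
    accounts { records { time fieldsMap } }
  }
}
"""

def s3_key_for(marker):
    """Deterministic object key per page, so re-runs overwrite cleanly."""
    return f"cato-audit/{marker or 'start'}.json"

def push_audit_to_s3(account_id, bucket):
    import boto3  # imported here so the paging logic is testable without it
    s3 = boto3.client("s3")
    marker = None
    while True:
        resp = requests.post(API_URL, headers=HEADERS, json={
            "query": AUDIT_QUERY,
            "variables": {"accountIDs": [account_id], "marker": marker},
        }).json()
        feed = resp["data"]["auditFeed"]
        s3.put_object(Bucket=bucket, Key=s3_key_for(marker),
                      Body=json.dumps(feed).encode())
        if not feed["fetchedCount"]:
            break  # empty page: caught up with the feed
        marker = feed["marker"]
```

Run it on a schedule (cron, Lambda) and persist the last marker if you want incremental pulls instead of restarting from the beginning.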
Greetings, I'm working on automating user cleanup and am attempting to delete inactive VPN users via the Cato API. According to the API conventions, I assumed the following mutation would work to remove users from our account:

```graphql
mutation deleteEntities($accountID: ID!, $entityIDs: [ID!]!) {
  deleteEntities(accountID: $accountID, entityIDs: $entityIDs) {
    success
    failed {
      userID
      reason
    }
  }
}
```

I'm calling it in Python with:

```python
delete_variables = {
    "accountID": account_id,
    "entityIDs": [uid]
}
delete_response = requests.post(API_URL, headers=HEADERS, json={
    "query": delete_mutation,
    "variables": delete_variables
})
```

However, I receive the following error in the response:

```json
{
  "errors": [
    {
      "message": "Cannot query field 'deleteEntities' on type 'Mutation'.",
      "extensions": {
        "code": "GRAPHQL_VALIDATION_FAILED"
      }
    }
  ],
  "data": null
}
```

What I am trying to figure out is: Is deleteEntities a valid mutation for deleting VPN users? If not, what is the correct GraphQL mutation for deleting users? Thank you guys!

Reporting the wrong category goes nowhere
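The GRAPHQL_VALIDATION_FAILED error in the deletion thread above means no deleteEntities field exists on the Mutation type at all. Rather than guessing names, you can introspect the schema and list the mutation fields that look relevant. The introspection query below is standard GraphQL, not Cato-specific; the filtering helper is just for convenience.

```python
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_KEY"}

# Standard GraphQL introspection: list every field on the Mutation type.
INTROSPECTION = """
query {
  __schema {
    mutationType {
      fields { name description }
    }
  }
}
"""

def find_candidates(field_names, keywords=("remove", "delete", "user")):
    """Return mutation names containing any of the given keywords."""
    return sorted(
        name for name in field_names
        if any(k in name.lower() for k in keywords)
    )

def list_mutations():
    resp = requests.post(API_URL, headers=HEADERS,
                         json={"query": INTROSPECTION}).json()
    fields = resp["data"]["__schema"]["mutationType"]["fields"]
    return [f["name"] for f in fields]
```

Running `print(find_candidates(list_mutations()))` against your tenant shows every mutation whose name mentions remove/delete/user; the matching field's arguments can then be checked in the API reference before you build the real deletion call.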
As per https://support.catonetworks.com/hc/en-us/articles/4413280530449-Customizing-the-Warning-Block-Page: "The Cato Security team regularly reviews reported wrong categories and validates that the content for the category is correct. When websites or applications belong to the wrong category, the Cato Security team updates the definition of the category."

Not so much. I just went through the last two months of such reports (filter for "Sub-Type Is Misclassification" in the Events log) and found 31 such requests from our users - most were for perfectly legitimate sites that for some reason were categorized as "Porn". And they still are - every single one of them. If the Cato security team is indeed not reviewing these submissions as originally intended, it would be great if that were communicated, so that we can remove the misleading reporting link and take care of the Brightcloud submissions ourselves.

Question regarding EntityID
Hi Team, We are working with a customer who needs to retrieve a list of users whose last connection exceeds one month. As advised by our Cato regional Sales Engineer, we are attempting to achieve this using the API in two steps:

1. Use query entityLookup to obtain the EntityID (userID)
2. Use query accountSnapshot to retrieve each user's last connection timestamp

However, we're encountering a challenge due to API rate limits. The entityLookup query is limited to 30 requests per minute (or 1500 over 5 hours), which makes it impractical to retrieve EntityIDs for all 2600+ users in a reasonable timeframe. Below is the Python code we are currently using in our attempt:

```python
import requests
import json
import csv
from datetime import datetime, timedelta

# Cato GraphQL endpoint URL
url = "https://api.catonetworks.com/api/v1/graphql2"

# HTTP headers and API key
headers = {
    "Content-Type": "application/json",
    "x-api-key": "Our client API key"
}

# Query 1: EntityID (userID) lookup
query1 = """
query AllMyRemoteUsers {
  entityLookup(accountID: 4265, type: vpnUser) {
    items {
      entity {
        id
        name
      }
      description
    }
    total
  }
}
"""

# Run Query 1
payload = {"query": query1}
response = requests.post(url, json=payload, headers=headers)
data = response.json()

# Extract EntityIDs
userIDs = []
try:
    items = data['data']['entityLookup']['items']
    for item in items:
        user_id = int(item['entity']['id'])
        userIDs.append(user_id)
except KeyError as e:
    print(f"Error parsing response: {e}")
    print(json.dumps(data, indent=2))

print(userIDs)

# Build the EntityID list as a GraphQL string
user_id_list_str = ",".join(str(uid) for uid in userIDs)
print("EntityID extraction complete")

# Query 2: accountSnapshot
query2 = f"""
query accountSnapshot {{
  accountSnapshot(accountID: 4265) {{
    users(userIDs: [{user_id_list_str}]) {{
      info {{
        name
        email
        phoneNumber
        status
        authMethod
        origin
      }}
      lastConnected
      version
    }}
  }}
}}
"""

# Run Query 2
payload = {"query": query2}
response = requests.post(url, json=payload, headers=headers)

# Parse the query2 JSON response
result = response.json()

# Collect users with no connection history in the past month
cutoff_date = datetime.utcnow() - timedelta(days=30)

# Prepare list to hold all rows to be saved
csv_rows = []
try:
    users = result['data']['accountSnapshot']['users']
    for user in users:
        last_connected_str = user.get('lastConnected')
        if last_connected_str:
            last_connected = datetime.strptime(last_connected_str, "%Y-%m-%dT%H:%M:%SZ")
            # Keep users whose last connection is OLDER than the cutoff
            if last_connected < cutoff_date:
                name = user['info']['name']
                email = user['info']['email']
                csv_rows.append([name, email, last_connected.strftime("%Y-%m-%d %H:%M:%S")])
except KeyError as e:
    print(f"Error extracting user data: {e}")

# Save to CSV
csv_file_path = "users_with_no_connection_in_30_days.csv"
with open(csv_file_path, mode='w', newline='', encoding='utf-8') as file:
    writer = csv.writer(file)
    writer.writerow(["Name", "Email", "Last Connected"])
    writer.writerows(csv_rows)

print(f"\nCSV file saved: {csv_file_path}")
```

In Query 2 above, you can see that we need to pass all of the EntityIDs (userIDs) to check each user's last-connection info. But because of entityLookup's limit, we only get 30 SDP users' EntityIDs at a time. Could you please advise whether there is any other way to get all the EntityIDs (userIDs) via the API, so we can list the users according to last connection? Best regards,

Voices Behind the Stack: Nick and Jack of Redner’s
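On the entityLookup rate-limit problem above: a single entityLookup call can return many items, so the 30-requests-per-minute cap matters less than it seems if you page in larger chunks instead of one request per user. The sketch below assumes entityLookup accepts `limit` and `from` arguments for paging (verify against the schema); the offset math and throttling are the reusable parts.

```python
import time
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_KEY"}

# Assumes entityLookup supports limit/from paging -- check the schema.
PAGED_QUERY = """
query Users($accountID: ID!, $limit: Int, $from: Int) {
  entityLookup(accountID: $accountID, type: vpnUser, limit: $limit, from: $from) {
    items { entity { id name } }
    total
  }
}
"""

def page_offsets(total, page_size):
    """Start offsets needed to cover `total` items in pages of `page_size`."""
    return list(range(0, total, page_size))

def fetch_all_user_ids(account_id, page_size=500, delay=2.5):
    ids, offset, total = [], 0, None
    while total is None or offset < total:
        resp = requests.post(API_URL, headers=HEADERS, json={
            "query": PAGED_QUERY,
            "variables": {"accountID": account_id, "limit": page_size, "from": offset},
        }).json()
        lookup = resp["data"]["entityLookup"]
        total = lookup["total"]
        ids.extend(int(i["entity"]["id"]) for i in lookup["items"])
        offset += page_size
        time.sleep(delay)  # stay well under 30 requests/minute
    return ids
```

For 2600 users at 500 per page that is six entityLookup calls, comfortably inside both the per-minute and the 5-hour limits.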
This month, we’re spotlighting two IT leaders who have been keeping a multi-location retail operation at the forefront of cybersecurity for over 20 years, and doing it with unmatched clarity, curiosity, and consistency. Meet Nick Hidalgo (aka NickH), VP of IT, and Jack Senesap (aka JackSenesap), Director of Infrastructure and Security at Redner’s, a locally owned and family-oriented retail food company in the US. Their secret? A passion for unifying complexity, a love of visibility, and a belief that the right tools and the right people make all the difference.

“We always know where our users are. We can deny access to things by default. That’s huge.” – Jack

“It’s the first tool I look at in the morning. Everything’s in one place.” – Nick

These two were early adopters of SASE from way back when it still sounded like just another buzzword. What changed their minds? Visibility. Simplicity. And the sense that this shift actually reduced complexity instead of adding more. They chose Cato Networks for its performance and security and stayed because it became a trusted part of how they work. “Now we have the resources to continue to improve.”

Why these two stand out:
- They’re always pushing forward: from expanding their TLSi reporting to exploring orchestration and automation.
- They’re deeply curious about AI: not just how it can help, but how it might reshape their roles.
- They’re passionate about their industry and always looking for ways to do more.

Off the clock? Nick is out on the lake or at the gym. Jack is tearing up the trails on his mountain bike or shooting hoops with a crew of all ages. And fun fact: Jack once won a car at a software user conference. (Seriously.)

“Security never sleeps,” Jack says, and hearing about everything he’s accomplishing at work, apparently neither does he. Huge thanks to Nick and Jack for their time, insights, and everything they do to keep their organization secure and forward-looking.
For more Redner’s fun – check out this nifty customer story here.

TLS Inspection and RBI
Hello, I'm new to Cato Cloud and I don't understand the behavior of this security feature. I have created a local SDP user and assigned it a license, and I'm able to connect to the tenant through the client. I've enabled the Internet Firewall, TLS Inspection, and RBI. Split tunneling is not enabled; I just wanted to test RBI, so all other internet traffic is blocked.

But when I access https://rbicheck.com, which is an uncategorized website, sometimes the site isn't isolated at all as it is in the simulator: the automatic download goes through and the certificate isn't replaced. And sometimes the website is blocked like any other website. I don't know if I'm missing something. I understand that the changes I make in the CMA take a few minutes to be acknowledged, and the logs aren't helping me. I would be very thankful if someone could help me.

Blocking icloud private relay "nicely"
I would like to block iCloud Private Relay in such a way that the user is notified and able to continue without it. Apple's recommended way to do this is to block DNS requests to mask.icloud.com and mask-h2.icloud.com so that a "no error/no answer" or NXDOMAIN response is returned. This alerts users that they either need to disable Private Relay or choose another network. Details are here: Prepare your network or web server for iCloud Private Relay - iCloud - Apple Developer.

Is there a way to configure this using only Cato? I cannot see how to create a custom DNS rule to block specific queries, and I cannot see how to create a custom IPS rule either. Is there a recommended way to do this? What are others doing? I am in a Windows shop, so I could redirect DNS queries to a Windows DNS server and use DNS query filtering, but I would rather have a Cato-only solution if possible.

Per Apple: "Some enterprise or school networks might be required to audit all network traffic by policy, and your network can block access to Private Relay in these cases. The user will be alerted that they need to either disable Private Relay for your network or choose another network. The fastest and most reliable way to alert users is to return either a 'no error no answer' response or an NXDOMAIN response from your network's DNS resolver, preventing DNS resolution for the following hostnames used by Private Relay traffic. Avoid causing DNS resolution timeouts or silently dropping IP packets sent to the Private Relay server, as this can lead to delays on client devices."

- mask.icloud.com
- mask-h2.icloud.com
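Whatever mechanism ends up blocking those two hostnames, it is worth verifying from a client that resolution actually fails fast (NXDOMAIN or no answer) rather than timing out, since Apple warns that timeouts and silent drops delay client devices. A small stdlib-only check, assuming the machine uses its normal resolver:

```python
import socket
import time

PRIVATE_RELAY_HOSTS = ["mask.icloud.com", "mask-h2.icloud.com"]

def resolution_blocked(hostname, max_seconds=2.0):
    """Return (blocked, fast): blocked=True if the name fails to resolve.

    A slow failure (longer than max_seconds) suggests dropped packets
    rather than the prompt NXDOMAIN / no-answer Apple recommends.
    """
    start = time.monotonic()
    try:
        socket.getaddrinfo(hostname, 443)
        blocked = False
    except socket.gaierror:
        blocked = True
    elapsed = time.monotonic() - start
    return blocked, elapsed <= max_seconds

for host in PRIVATE_RELAY_HOSTS:
    blocked, fast = resolution_blocked(host)
    print(f"{host}: blocked={blocked}, fast_failure={fast}")
```

Run it from a machine on the network in question: `blocked=True, fast_failure=True` for both names is the behavior Apple describes; `blocked=True` with a slow failure points at dropped packets instead of a proper NXDOMAIN.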