Saturday, July 26, 2025

Friday, July 25, 2025

ServiceNow AI Agent

Business App (CMDB) - will also take care of hygiene


ensure the A2A protocol is working




 Workflow



KB articles as Agents




Tool 









Thursday, July 24, 2025

MCP vs A2A

 

A2A and the Model Context Protocol (MCP) are complementary standards for building robust agentic applications:

  • MCP (Model Context Protocol): Connects agents to tools, APIs, and resources with structured inputs/outputs. Think of it as the way agents access their capabilities.
  • A2A (Agent2Agent Protocol): Facilitates dynamic, multimodal communication between different agents as peers. It's how agents collaborate, delegate, and manage shared tasks.
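
A hedged sketch of the difference, using hypothetical payloads that only show the shape of each protocol (the tool name, task id, and text are made up): an MCP call asks a server to run a tool with structured arguments, while an A2A message is one peer agent handing work to another as part of a shared task.

# Hypothetical payload shapes, for illustration only (not copied from either spec).

# MCP: an agent calls a tool exposed by an MCP server with structured inputs/outputs.
mcp_tool_call = {
    "method": "tools/call",              # JSON-RPC style request to the MCP server
    "params": {
        "name": "lookup_cmdb_record",    # hypothetical tool name
        "arguments": {"ci_name": "payments-api"},
    },
}

# A2A: one agent sends a message to a peer agent as part of a shared task.
a2a_task_message = {
    "task_id": "task-123",               # shared task the two agents collaborate on
    "role": "user",                      # the requesting agent's turn
    "parts": [{"type": "text", "text": "Audit the NSG rules for subnet payments-prod"}],
}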



LangGraph Open Agent Framework


 

Highlights
  • No different from Azure AI Foundry, AWS Bedrock, or GCP Agent Garden
  • Supports only parallel execution of Tasks
  • No support for Context Memory
  • Supports native connectors
  • Available in Private cloud
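
For orientation, a minimal LangGraph graph looks like the sketch below; the state fields, node name, and echo logic are illustrative placeholders, not part of these notes.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # Placeholder logic; a real node would call an LLM or a tool here.
    return {"answer": f"Echo: {state['question']}"}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()

print(graph.invoke({"question": "What is A2A?"}))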


Information Systems Security Agent (Multi-Agent)

Agentic AI-powered security automation framework that performs:

  1. Threat modeling (using STRIDE, OWASP Top 10),

  2. Risk assessment (using DREAD/FAIR models),

  3. Architecture parsing (from diagrams or IaC),

  4. Compliance mapping (to NIST, ISO, SOC2),

  5. Live Azure infrastructure auditing (e.g., VNet, NSG, Key Vaults, Route Tables),

  6. Automated remediation planning (with suggested fixes),

  7. Audit-ready reporting (in PDF or dashboard format),

  8. ServiceNow integration (for CMDB, tickets, approvals).

Build this using a multi-agent system (LangGraph or Azure AI Foundry Agents) coordinated by a Supervisor Agent, with contextual memory (e.g., Azure AI Search or Weaviate). Include:

  • A system architecture diagram,

  • A step-by-step description of each agent's function and AI implementation,

  • An ROI analysis per agent (quantifying time/money saved),

  • A PowerPoint presentation summarizing all components,

  • A PNG diagram of the architecture, and

  • Exportable formats (PowerPoint, draw.io, or PlantUML if needed).
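
Below is a minimal, hedged sketch of the supervisor pattern described above, using LangGraph; the agent names, routing rule, and placeholder findings are hypothetical stand-ins for the real threat-modeling and compliance agents, not a working security framework.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class AuditState(TypedDict):
    request: str
    findings: list
    next_agent: str


def supervisor(state: AuditState) -> dict:
    # Hypothetical routing; a real supervisor would ask an LLM which specialist to call next.
    if "compliance" in state["request"].lower():
        return {"next_agent": "compliance_mapping"}
    return {"next_agent": "threat_modeling"}


def threat_modeling(state: AuditState) -> dict:
    # Placeholder for a STRIDE / OWASP Top 10 analysis agent.
    return {"findings": state["findings"] + ["STRIDE review queued"]}


def compliance_mapping(state: AuditState) -> dict:
    # Placeholder for a NIST / ISO / SOC2 mapping agent.
    return {"findings": state["findings"] + ["NIST control mapping queued"]}


builder = StateGraph(AuditState)
builder.add_node("supervisor", supervisor)
builder.add_node("threat_modeling", threat_modeling)
builder.add_node("compliance_mapping", compliance_mapping)
builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", lambda s: s["next_agent"])
builder.add_edge("threat_modeling", END)
builder.add_edge("compliance_mapping", END)
graph = builder.compile()

print(graph.invoke({"request": "Map the VNet design to compliance controls", "findings": []}))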




Monday, June 23, 2025

How to Create Agent in Azure AI Foundry

 https://learn.microsoft.com/en-us/azure/ai-foundry/agents/overview?context=%2Fazure%2Fai-foundry%2Fcontext%2Fcontext 



pip install azure-ai-projects
pip install azure-identity


import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.agents.models import CodeInterpreterTool

# Create an Azure AI Client from an endpoint, copied from your Azure AI Foundry project.
# You need to login to Azure subscription via Azure CLI and set the environment variables
project_endpoint = os.environ["PROJECT_ENDPOINT"]  # Ensure the PROJECT_ENDPOINT environment variable is set

# Create an AIProjectClient instance
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),  # Use Azure Default Credential for authentication
    api_version="latest",
)

code_interpreter = CodeInterpreterTool()

with project_client:
    # Create an agent with the Code Interpreter tool
    agent = project_client.agents.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],  # Model deployment name
        name="my-agent",  # Name of the agent
        instructions="You are a helpful agent",  # Instructions for the agent
        tools=code_interpreter.definitions,  # Attach the tool
    )
    print(f"Created agent, ID: {agent.id}")

    # Create a thread for communication
    thread = project_client.agents.threads.create()
    print(f"Created thread, ID: {thread.id}")

    # Add a message to the thread
    message = project_client.agents.messages.create(
        thread_id=thread.id,
        role="user",  # Role of the message sender
        content="What is the weather in Seattle today?",  # Message content
    )
    print(f"Created message, ID: {message['id']}")

    # Create and process an agent run
    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
    print(f"Run finished with status: {run.status}")

    # Check if the run failed
    if run.status == "failed":
        print(f"Run failed: {run.last_error}")

    # Fetch and log all messages
    messages = project_client.agents.messages.list(thread_id=thread.id)
    for message in messages:
        print(f"Role: {message.role}, Content: {message.content}")

    # Delete the agent when done
    project_client.agents.delete_agent(agent.id)
    print("Deleted agent")

Sunday, September 15, 2024

Run Llama 3 on your laptop

Ollama is a convenient platform for the local development of open-source AI models. 

Why should you use open-source AI?


Today, we have access to powerful Large Language Models, such as GPT-4o or Claude 3.5 Sonnet. 

But they come with 4 major problems: 



Data Privacy. When “talking” to GPT-4 you always send your data to the OpenAI server. For most companies, this is the #1 reason NOT to use AI. 


Cost. The best-performing LLMs are expensive, especially for high-volume applications. 


Dependency. Using GPT-4 or Claude means you rely on OpenAI or Anthropic. Most businesses prefer independence. 


Limited Customization. Every business has unique needs and problems. Custom solutions are crucial for many. But customizing the biggest models is possible only through Prompt Engineering. 


Let’s compare it to the open-source models: 

Full Privacy. We run open-source models locally, which means we don’t send the data anywhere. They can work offline! 
Lower Cost. You can use many “local” models for free. You pay for more powerful ones, but they’re much cheaper than GPT-4. 
Independence & Control. You’ve got full control over the model. Once you download it to your computer, you “own” it. 
Customization. You can fine-tune, re-train, and modify open-source LLMs to fit your specific needs.

But of course, open-source LLMs have their own limitations: 

Worse Performance. Reasoning and general performance of open-source LLMs typically lag behind GPT-4. 
Integration Challenges. Integrating them requires more expertise and effort. 
Hardware Costs. LLMs require high computational power. To run them for high-volume applications, you need your own GPUs. 

Running local Llama 3 with Ollama. 
All you need: Download Ollama on your local system. 
Download one of the local models on your computer using Ollama. For example, if I want to use Llama3, I need to open the terminal and run: 

$ ollama run llama3 

If it’s the first time you use the model, Ollama will first download it. 
Because it has 8B parameters, it’ll take a while. 
 Once you download the model, you can use it through the Ollama Python API. 
To install the Ollama Python library, run the following command: 
 $ pip install ollama 

And with these steps, you’re ready to run the code from this article. 

 In this article, we’ll explore how to use open-source Large Language Models (LLMs) with Ollama. We’ll go through the following topics: 

  • Using open-source models with Ollama
  • The importance of the system prompt
  • Streaming responses with Ollama
  • The practical applications of the LLM temperature
  • The usage and limitations of the max tokens parameter
  • Replicating “creative” responses with the seed parameter
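
As a quick preview of those parameters, here is a hedged sketch using the Ollama Python library; the prompt, model choice, and option values are arbitrary examples.

import ollama

# Stream the response chunk-by-chunk instead of waiting for the full answer.
stream = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Name three uses of a system prompt."},
    ],
    stream=True,
    options={
        "temperature": 0.2,   # lower values give more deterministic answers
        "num_predict": 128,   # cap on generated tokens (Ollama's max-tokens option)
        "seed": 42,           # fixed seed makes sampled output reproducible
    },
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)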

Getting the Simple Response

import ollama

model = "llama3"
response = ollama.chat(
    model=model,
    messages=[
        {"role": "user", "content": "What's the capital of Poland?"}
    ]
)
print(response["message"]["content"])

## Prints: The capital of Poland is Warsaw (Polish: Warszawa).
  1. import ollama to use Ollama API
  2. model = "llama3" to define the model we want to use
  3. ollama.chat() to get the response. We used 2 parameters:
    model that we defined before
    messages where we keep the list of messages

Sunday, February 18, 2024

OLTP and OLAP

When to use OLAP vs. OLTP: Online analytical processing (OLAP) and online transaction processing (OLTP) are two different data processing systems designed for different purposes. OLAP is optimized for complex data analysis and reporting; OLTP is optimized for transactional processing and real-time updates.
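
A hedged illustration of the difference, with a hypothetical orders table: the OLTP-style statement touches a single row for a real-time update, while the OLAP-style query scans and aggregates many rows for reporting.

import sqlite3

# Hypothetical orders table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer, region, amount) VALUES (?, ?, ?)",
    [("alice", "EU", 120.0), ("bob", "US", 80.0), ("carol", "EU", 40.0)],
)

# OLTP-style: read or update one record in real time.
conn.execute("UPDATE orders SET amount = 130.0 WHERE id = 1")

# OLAP-style: aggregate across many rows for analysis and reporting.
for region, total in conn.execute("SELECT region, SUM(amount) FROM orders GROUP BY region"):
    print(region, total)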

Tuesday, February 13, 2024

test

1. Snowflake private link
  1) Gary and Justin to follow up with the GIS team
  2) Swamy to document the 3 options with pros and cons discussed
  3) What is the data classification and volume? What data is stored in Snowflake?
  Next week - setting up a follow-up meeting
2. monday.com network integration
  Dev team wants bi-directional API communication where Lambda/EKS can call the monday.com API and monday.com can call Lambda/EKS.

Monday, January 29, 2024

Cyber Security Standards - Risk Based Framework

Purpose

  • The Risk Based Framework (RBF) is a risk classification system developed by the Enterprise Cyber Security (ECS) department of the Cyber Risk Management team. 
  • ECS policy is intended to protect the firm in an evolving threat landscape, regardless of changes in technology or business practices. 
  • Even if specific terminology or scenarios are not part of the text, it is expected that you will exercise sound reasoning and judgment to adhere to the intent of stated requirements, practices, and implementations in both letter and spirit.


Scope

  • All systems that are listed in the IT Service Manager (ITSM) application (e.g., ServiceNow) are required to have an RBF classification. 
  • All systems where the lifecycle stage is ‘Concept’, ‘Acquisition/Development’, or ‘Retired’ are not in scope.