Saturday, April 18, 2026

DOCKER ARG instruction as opposed to ENV instruction

 In Docker, both ARG and ENV define variables, but they serve different purposes. The ARG instruction defines build-time variables that users can pass to the builder. Unlike ENV (environment variables), ARG values do not persist in the final image and are not available to the container once it is running.


Key Characteristics
  • Build-Time Only: Variables defined with ARG are only accessible during the image creation process (e.g., within RUN or COPY instructions).
  • Command-Line Overrides: You can pass or override values using the --build-arg flag with the docker build command.
  • Default Values: You can specify a default value in the Dockerfile (e.g., ARG VERSION=1.0) which is used if no value is passed during the build.
  • Layer Visibility: While ARG values don't persist at runtime, they are visible in the image's history via docker history, so they should never be used for secrets like passwords or API keys.
Common Use Cases
  • Version Management: Specifying versions for base images or package installations (e.g., ARG NODE_VERSION=20).
  • Build Customization: Enabling or disabling specific features or configurations based on the build environment (e.g., dev vs. prod).
  • Metadata: Storing build-specific information like build dates or commit hashes as labels. 
Scoping Rules
  • Stage Local: An ARG is only available in the build stage where it is defined. In multi-stage builds, you must re-declare the ARG in each stage that needs it.
  • Global Scope: An ARG placed before the first FROM instruction is in the global scope and can be used to parameterize the FROM line, but it must be re-declared after FROM to be used in later instructions.
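The scoping rules above can be seen in a minimal sketch of a multi-stage Dockerfile (image and variable names are illustrative):

```dockerfile
# Global-scope ARG: declared before the first FROM, so it can
# parameterize FROM lines in every stage.
ARG NODE_VERSION=20

FROM node:${NODE_VERSION} AS build
# Must be re-declared to be visible inside this stage's instructions.
ARG NODE_VERSION
RUN echo "Building with Node ${NODE_VERSION}"

FROM node:${NODE_VERSION}-slim
# NODE_VERSION is NOT re-declared here, so ${NODE_VERSION} would
# expand to an empty string in this stage's RUN instructions.
RUN echo "Version here is: '${NODE_VERSION}'"
```

Overriding the default at build time: docker build --build-arg NODE_VERSION=22 -t myapp .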

Core concepts in Agentic AI

 This is a deep and rapidly evolving field, so the "concepts" are not limited to a single definition.

At its core, Agentic AI refers to an AI system that is not merely a passive responder (like a chatbot) but an active, goal-directed entity that can autonomously plan, reason, execute a series of actions, and self-correct to achieve a desired outcome.

To properly list the concepts, I will break them down into four categories: Core Architecture, Operational Processes, Advanced Systems, and Control/Safety.


🧠 1. Core Architectural Concepts (The Components)

These are the fundamental building blocks necessary for an AI system to exhibit agency.

1. Agent Framework / Orchestrator

  • Concept: The central controller or "brain" of the system. It manages the workflow, takes the high-level goal, and orchestrates the interactions between the memory, tools, and planning module.
  • Function: It prevents the LLM from hallucinating or losing track of the main goal by enforcing a structured thinking process (e.g., Plan → Execute → Observe → Critique).

2. Memory Systems

  • Concept: Unlike a traditional LLM, which has limited context window memory, an agent needs sophisticated memory to retain information across hours or days of work.
  • Types:
    • Short-Term Memory (Context Window): The immediate context, scratchpad, or current turn in the conversation.
    • Long-Term Memory (Vector Database/Knowledge Graph): Stored, searchable information about past interactions, external documents, or domain knowledge.
    • Episodic Memory: The agent's ability to remember the context, sequence, and outcomes of past complex tasks.

3. Tool Use / Function Calling

  • Concept: The ability for the AI to interact with the outside world. This is what separates an LLM from an agent.
  • Examples: Instead of just saying "I can check the weather," the agent executes a real function call (weather_api(city='NYC')). These tools can include APIs, databases, code interpreters, or external software interfaces.
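A minimal sketch of the dispatch side of function calling (all names here are hypothetical, not a real framework API): the model emits a structured call, and the orchestrator routes it to real code.

```python
# Sketch of tool use / function calling. The model would emit a structured
# call like {"name": "get_weather", "args": {"city": "NYC"}}; the framework
# looks up the tool and executes it.

def get_weather(city: str) -> str:
    """A hypothetical tool; a real one would hit a weather API."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(call: dict) -> str:
    """Route a model-emitted tool call to the registered function."""
    return TOOLS[call["name"]](**call["args"])

print(dispatch({"name": "get_weather", "args": {"city": "NYC"}}))
# prints: Sunny in NYC
```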

4. Planning Module (Task Decomposition)

  • Concept: The agent cannot solve a massive problem in one step. The planning module takes a complex goal ("Book me a multi-day business trip to London") and breaks it down into a sequential, manageable list of steps ("1. Check dates. 2. Search flights. 3. Search hotels. 4. Compile itinerary.").

🔄 2. Operational Concepts (The Process Cycle)

These concepts describe how the agent operates and reasons about its actions.

5. ReAct (Reasoning + Action)

  • Concept: One of the most foundational frameworks in agent design. It forces the LLM to explicitly output its internal Thought (reasoning), select an Action (tool use), and observe the Observation (the result of the tool).
  • Cycle: Thought → Action → Observation → New Thought.
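The cycle above can be sketched as a loop in Python (the think and act callables are stand-ins for an LLM call and a tool runner, respectively):

```python
# Sketch of a ReAct-style loop. think(goal, history) returns a
# (thought, action) pair, with action=None meaning "done";
# act(action) runs the tool and returns an observation.

def react_loop(goal, think, act, max_steps=5):
    history = []
    for _ in range(max_steps):
        thought, action = think(goal, history)
        if action is None:          # the model decided it is finished
            return thought, history
        observation = act(action)   # execute the tool, capture the result
        history.append((thought, action, observation))
    return None, history            # gave up: step budget exhausted
```

With stub functions, one tool call is made and the loop terminates on the second iteration.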

6. Reflection / Self-Correction

  • Concept: The ability of the agent to pause after an action, evaluate the result, and ask itself: "Did that work? Was that the best path? What should I try next?"
  • Importance: This is what makes an agent robust. If a tool fails or provides unexpected data, the reflection mechanism allows the agent to pivot and retry or adjust its plan, rather than simply failing.

7. Iterative Execution / Looping

  • Concept: An agent doesn't run a script once; it enters a loop. It executes a set of actions, gathers data, updates its plan, and then executes the next set of actions until the goal criteria are met or a failure condition is hit.

🧑‍💻 3. Advanced & Multi-System Concepts

These concepts push the boundaries toward greater complexity and real-world application.

8. Multi-Agent Systems (MAS)

  • Concept: Instead of one monolithic agent, the task is divided among several specialized, collaborating agents.
  • Example:
    • Agent A (Researcher): Focuses only on data gathering.
    • Agent B (Analyst): Focuses only on interpreting the data provided by Agent A.
    • Agent C (Writer): Focuses only on synthesizing the final report based on Agents A and B's output.
  • Benefit: Allows for tackling extremely complex tasks that require multiple, distinct skill sets.
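The Researcher → Analyst → Writer division above can be sketched as a simple pipeline; each "agent" here is just a plain function (in a real system each would wrap an LLM with its own tools and prompt):

```python
# Hypothetical specialized agents, each consuming the previous one's output.
def researcher(topic):
    return f"facts about {topic}"

def analyst(facts):
    return f"analysis of {facts}"

def writer(analysis):
    return f"report: {analysis}"

def run_pipeline(topic, agents=(researcher, analyst, writer)):
    """Chain the agents: A's output becomes B's input, and so on."""
    result = topic
    for agent in agents:
        result = agent(result)
    return result

print(run_pipeline("solar power"))
# prints: report: analysis of facts about solar power
```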

9. Goal-Function Optimization

  • Concept: Defining the ultimate metric for success. The agent doesn't just complete the steps; it completes them in the most optimal way (e.g., finding the cheapest trip, the fastest route, or the highest-rated product, based on a defined function).

10. Embodiment (Embodied AI)

  • Concept: Taking agency concepts into the physical world. An AI that doesn't just plan a sequence of steps, but controls a physical entity (a robotic arm, a drone, etc.) to execute the plan.

⚖️ 4. Safety, Control, and Ethical Concepts

As agency increases, control and safety become paramount.

11. Guardrails and Constraints

  • Concept: Explicit safety mechanisms and constraints built around the agent to prevent it from acting in dangerous, illegal, or unethical ways.
  • Example: "Never use the delete_account() function unless explicit human approval is given."

12. Human-in-the-Loop (HITL)

  • Concept: The process of requiring human review or explicit approval at critical decision points. The agent plans and executes 90% of the way, but pauses before the final, high-impact action, asking the user: "Are you sure you want me to send this email?"
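The approval gate described above is a simple control pattern; a sketch (function names hypothetical):

```python
# Sketch of a human-in-the-loop gate: low-impact actions run directly,
# high-impact actions pause and require explicit approval.

def execute_with_approval(action, is_high_impact, approve):
    if is_high_impact(action) and not approve(action):
        return "aborted"
    return f"executed {action}"

# approve() here would prompt a human; stubs used for illustration.
print(execute_with_approval("send_email",
                            is_high_impact=lambda a: True,
                            approve=lambda a: False))
# prints: aborted
```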

13. Explainability (XAI for Agents)

  • Concept: The ability of the agent to explain why it chose a particular plan, why it discarded an alternative, and how the observed evidence led to its current conclusion. This builds trust and facilitates debugging.

Scheduling of Pods in Kubernetes

 

🧠 1. Node Selector (Simplest)

This is the most basic way to constrain a Pod to nodes.

👉 You say: “Run this Pod only on nodes with this label.”

spec:
  nodeSelector:
    disktype: ssd

✔ Simple key-value match
❌ No flexibility (no OR, no soft rules)

👉 Think of it as:

“Hard filter: only these nodes allowed”


🎯 2. Node Affinity (Advanced Node Selector)

This is a more expressive and powerful version of nodeSelector.

Two types:

Required (Hard rule)

requiredDuringSchedulingIgnoredDuringExecution

Pod must go to matching node or it won’t be scheduled.

🤝 Preferred (Soft rule)

preferredDuringSchedulingIgnoredDuringExecution

Scheduler tries to match but can ignore if needed.

Example:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype
              operator: In
              values:
                - ssd

✔ Supports operators: In, NotIn, Exists, Gt, etc.
✔ Can express complex logic

👉 Think:

“Smart filtering with flexibility”


🚫 3. Pod Affinity / Anti-Affinity

This is about Pod-to-Pod relationships, not nodes.


🤝 Pod Affinity (co-location)

👉 “Schedule this Pod near another Pod”

Example:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: redis
      topologyKey: kubernetes.io/hostname

✔ Ensures Pods are on same node / zone


❌ Pod Anti-Affinity (separation)

👉 “Do NOT schedule this Pod near similar Pods”

Example:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web
      topologyKey: kubernetes.io/hostname

✔ Useful for high availability

👉 Think:

  • Affinity → “stick together”
  • Anti-affinity → “spread apart”

⚠️ 4. Taints and Tolerations (Opposite Model)

This is where many people get confused.

👉 Instead of Pods choosing nodes, nodes repel Pods


🚫 Taint (on Node)

kubectl taint nodes node1 key=value:NoSchedule

👉 Means:

“Do NOT schedule any Pods here unless they tolerate this”


✅ Toleration (on Pod)

tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"

👉 Means:

“This Pod is allowed on tainted nodes”


🎯 Effects of Taints

  • NoSchedule → don’t place new Pods
  • PreferNoSchedule → avoid if possible
  • NoExecute → evict existing Pods

👉 Think:

  • Taint = “Keep out sign 🚫”
  • Toleration = “I have permission 🎫”

⚙️ 5. Topology Manager

This is more advanced and often overlooked.

👉 It ensures optimal resource alignment on a node, especially for:

  • CPU
  • NUMA
  • GPUs

Why needed?

Modern servers have NUMA architecture:

  • Memory + CPU split into zones
  • Cross-zone access = slower

What Topology Manager does:

It coordinates:

  • CPU Manager
  • Device Manager
  • Memory Manager

👉 Goal:

Allocate resources from the same NUMA node


Policies:

  • none → no alignment
  • best-effort → try to align
  • restricted → enforce if possible
  • single-numa-node → strict alignment
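The policy is chosen per node in the kubelet configuration. A sketch (field names follow the KubeletConfiguration v1beta1 API; note the Topology Manager can only align CPUs when the static CPU manager policy is enabled):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerPolicy: single-numa-node   # strict NUMA alignment
cpuManagerPolicy: static                  # required for CPU pinning/alignment
```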

👉 Think:

“Even inside a node, placement matters”


🔁 How They Fit Together

Feature           | Level       | Purpose
Node Selector     | Node        | Simple node filtering
Node Affinity     | Node        | Advanced node rules
Pod Affinity      | Pod-to-Pod  | Co-locate Pods
Pod Anti-Affinity | Pod-to-Pod  | Spread Pods
Taints            | Node        | Repel Pods
Tolerations       | Pod         | Allow exceptions
Topology Manager  | Inside Node | Optimize hardware locality

🔥 Real-world mental model

Imagine Kubernetes scheduling like this:

  1. Node Selector / Affinity → shortlist nodes
  2. Taints → remove forbidden nodes
  3. Pod Affinity/Anti-Affinity → decide placement relative to other Pods
  4. Topology Manager → fine-tune hardware allocation

Thursday, April 16, 2026

Dynamic Import in Javascript

 

Dynamic Imports
If you need to load a function conditionally or on-demand (e.g., inside an if block or an event listener), use the import() function. This returns a Promise.

// Using .then()
import('./myFile.js').then((module) => {
  module.myFunction();
});

// Using async/await
const module = await import('./myFile.js');
module.myFunction();

React : Difference in creating a functional component with and without "function" keyword

Syntax Comparison

Method               | Syntax Example
Function Declaration | function MyComponent(props) { ... }
Const Arrow Function | const MyComponent = (props) => { ... }

Key Differences to Consider

IMP: Arrow functions do not have their own "this" binding; they inherit "this" from the enclosing scope, so you cannot rely on a function-local "this" inside them. (In function components this rarely matters, since Hooks replace "this" entirely.)
  • Hoisting: You can use a component defined with the function keyword before it appears in your code because it is "hoisted" to the top of the scope. Components defined with const must be defined before they are used.
  • Exporting: Using the function keyword allows you to use export default on the same line as the definition (e.g., export default function MyComponent() {}), which you cannot do with const.  
  • "export default" cannot be combined with "const" in the same statement; they must be split into separate statements. If you still want arrow syntax on the export line itself, drop both the "const" keyword and the function name, producing an anonymous default export. That is fine for a default export, since the importing module can bind it to any name it likes.
  • TypeScript: If you use TypeScript, it is easier to apply the React.FC type to a const variable than to a standard function declaration.
  • Debugging: In older versions of React, function declarations provided clearer names in the React DevTools compared to anonymous arrow functions, though modern tooling largely fixes this for const variables as well. 
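The hoisting difference can be demonstrated in plain JavaScript, without React (names here are illustrative):

```javascript
// Function declarations are hoisted: this call works even though
// `declared` is defined further down the file.
const early = declared();

function declared() {
  return "declaration";
}

// A const arrow function is NOT hoisted: calling arrow() above this
// line would throw a ReferenceError (temporal dead zone).
const arrow = () => "arrow";

console.log(early, arrow()); // prints: declaration arrow
```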

Which should you choose?

Most modern React developers prefer const arrow functions for consistency within the component (since hooks and event handlers inside are usually arrow functions), but using the function keyword is still a perfectly valid and standard way to write components.

React : Event handlers with parameters in functional components

In React, to pass parameters to an event handler, you must wrap the function call so that it is not executed immediately during the component's render phase. 

Primary Methods to Pass Parameters
  • Arrow Function Wrapper: The most common approach is to use an inline arrow function in the JSX attribute, e.g., onClick={() => handleClick(id)}. This creates a new function that calls your handler with the specific arguments when the event occurs.
  • Function.bind(): You can use the bind() method to pre-configure a function with specific arguments. The first argument to bind() is the "this" context (usually null in functional components or "this" in class components), followed by the parameters you want to pass.
  • Currying (Higher-Order Function): Define a function that returns another function. This is useful for passing parameters while keeping the JSX cleaner. 
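Currying can be demonstrated without React at all (handler names are hypothetical): the outer call fixes the parameter, and the returned function is what React would invoke with the event.

```javascript
// A curried handler: makeClickHandler(id) returns the actual
// event handler, with `id` captured in its closure.
const makeClickHandler = (id) => (event) => {
  return `clicked ${id} via ${event.type}`;
};

// In JSX this would read: onClick={makeClickHandler(item.id)}
const handler = makeClickHandler(42);
console.log(handler({ type: "click" })); // prints: clicked 42 via click
```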
Handling the Event Object

If you need both a custom parameter and the React SyntheticEvent object (e.g., to call event.preventDefault()), you must pass the event explicitly when using an arrow function.
  • Arrow Function: onClick={(e) => handleClick(id, e)}
  • Bind: onClick={handleClick.bind(null, id)} (the event object is automatically passed as the last argument) 
Performance Considerations

Creating functions inline (using arrow functions or bind()) generates a new function instance on every render. While usually fine for small applications, for performance-critical components (like large lists), consider using the useCallback hook to memoize the handler.




How to use useCallback hook to pass parameters

To use the useCallback hook with a parameter, you define the parameter in the function signature within the hook. This caches the function definition, ensuring that the same function instance is used between re-renders unless its dependencies change. 

Implementation with an ID Parameter

In this pattern, the useCallback hook defines how to handle the ID, while the actual ID is passed during the execution of that memoized function. 
import { useCallback } from 'react';

const MyComponent = ({ items }) => {
  // 1. Define the memoized handler with a parameter
  const handleItemClick = useCallback((id) => {
    console.log(`Clicked item with ID: ${id}`);
  }, []); // Empty dependency array means this function reference never changes

  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>
          {/* 2. Pass the parameter when the event occurs */}
          <button onClick={() => handleItemClick(item.id)}>
            Click {item.name}
          </button>
        </li>
      ))}
    </ul>
  );
};

Key Rules for useCallback

  • Parameter Location: Parameters are defined in the function passed to useCallback. They are not passed into the dependency array unless the logic inside the function relies on an external value that changes.
  • Dependency Array: Only include reactive values (props, state, or variables) that the function uses internally. If you don't use any external variables, use an empty array [].
  • Wrapper Still Required: In your JSX, you still need an inline arrow function (e.g., () => handleItemClick(id)) to actually call your memoized function with the specific ID. 

When is this actually useful?

Using useCallback with parameters is most effective when:
  • Passing to React.memo Children: If you pass this handler to a child component wrapped in React.memo, a stable function reference prevents the child from re-rendering unnecessarily.
  • Use in useEffect: If another useEffect or hook depends on your handler, memoizing it prevents that effect from running on every render. 






What about class components in React now ?

React still supports class-based components, but they are no longer recommended for new code. 


The current status of class components is as follows:
  • Supported, not Preferred: React has no immediate plans to remove class components, ensuring that legacy applications continue to function.
  • "Legacy" Status: Official documentation now categorizes class components under Legacy React APIs, emphasizing that function components with Hooks are the modern standard.
  • Missing Features: New React features, such as Server Components and certain optimizations, are designed specifically for function components and may not be available for classes.
  • Mandatory Use Case (Error Boundaries): Currently, you still must use a class component to create an Error Boundary, as there is no Hook equivalent for the getDerivedStateFromError or componentDidCatch lifecycle methods yet.
  • Coexistence: You can freely mix class and function components within the same project. 
Why the Shift?

The React team transitioned to function components because classes were found to be more difficult for both humans and machines to optimize. Function components with Hooks allow for:
  • Better logic reuse without complex patterns like Higher-Order Components.
  • Cleaner code by eliminating the need for binding and constructors.
  • Improved performance through more efficient minification and "tree-shaking" during the build process. 
