Posts Tagged ‘NodeJS’

Prototype Pollution: The Silent JavaScript Vulnerability You Shouldn’t Ignore

Prototype pollution is one of those vulnerabilities that many developers have heard about, but few fully understand—or guard against. It’s sneaky, dangerous, and more common than you’d think, especially in JavaScript and Node.js applications.

This post breaks down what prototype pollution is, how it can be exploited, how to detect it, and most importantly, how to fix it.


What Is Prototype Pollution?

In JavaScript, all objects inherit from Object.prototype by default. If an attacker can modify that prototype via user input, they can change how every object behaves.

This is called prototype pollution, and it can:

  • Alter default behavior of native objects
  • Lead to privilege escalation
  • Break app logic in subtle ways
  • Enable denial-of-service (DoS) or even remote code execution in some cases

Real-World Exploit Example

The classic vector is a recursive ("deep") merge of untrusted JSON into an existing object. A flat copy such as Object.assign({}, payload) only re-points a single object's prototype; the global pollution happens when merge logic walks into the __proto__ key and writes through it:

const payload = JSON.parse('{ "__proto__": { "isAdmin": true } }');

// A vulnerable deep merge effectively executes the line below.
// Reading ({})['__proto__'] returns Object.prototype itself,
// so the write lands on the prototype shared by every object:
({})['__proto__']['isAdmin'] = payload['__proto__']['isAdmin'];

console.log({}.isAdmin); // → true

Now, any object in your app believes it’s an admin. That’s the essence of prototype pollution.


How to Detect It

✅ Static Code Analysis

  • ESLint
    • Use eslint-plugin-security, plus ESLint’s built-in no-prototype-builtins rule
  • Semgrep
    • Detect unsafe merges with custom rules

Dependency Scanning

  • npm audit, yarn audit, or tools like Snyk, OWASP Dependency-Check
  • Many past CVEs (e.g., Lodash < 4.17.12) were related to prototype pollution

Manual Testing

Try injecting:

{ "__proto__": { "injected": true } }

Then check if unexpected object properties appear in your app.
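
A quick probe (a hypothetical sketch; the endpoint and merge call are whatever your app actually exposes) is to exercise the suspect code path with that payload and then inspect a brand-new object:

// Run after the payload has been through your app's merge logic
if (({}).injected === true) {
  console.log('Object.prototype was polluted!');
}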


How to Fix It

1. Sanitize Inputs

Never allow user input to include dangerous keys:

  • __proto__
  • constructor
  • prototype

2. Avoid Deep Merge with Untrusted Data

Use libraries that enforce safe merges:

  • deepmerge with safe mode
  • Lodash >= 4.17.12

3. Write Safe Merge Logic

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    // Reject keys that would write through to the prototype chain
    if (!['__proto__', 'constructor', 'prototype'].includes(key)) {
      target[key] = source[key];
    }
  }
  return target;
}

Note that this version is shallow. A deep merge must repeat the same key check at every level of recursion; that recursive step is exactly where vulnerable libraries slipped up.

4. Use Secure Parsers

  • secure-json-parse
  • @hapi/bourne (Hapi’s hardened, prototype-pollution-safe JSON.parse)
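
For instance, secure-json-parse can reject or strip dangerous keys at parse time. A minimal sketch, assuming its documented protoAction option (verify against the version you install):

// npm install secure-json-parse
const sjson = require('secure-json-parse');

// protoAction 'remove' silently strips __proto__ keys; the default, 'error', throws
const obj = sjson.parse('{ "__proto__": { "isAdmin": true } }', { protoAction: 'remove' });

console.log(obj.isAdmin); // undefined: the key never made it into the object
console.log({}.isAdmin);  // undefined: Object.prototype stays untouched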

TL;DR

Task                           Tool / Approach
Scan source code               ESLint, Semgrep
Test known payloads            Manual JSON fuzzing
Scan dependencies              npm audit, Snyk
Sanitize keys before merging   Allowlist strategy
Patch libraries                Update Lodash, jQuery

Final Thoughts

Prototype pollution isn’t just a theoretical risk. It has appeared in real-world vulnerabilities in major libraries and frameworks.

If your app uses JavaScript—on the frontend or backend—you need to be aware of it.

Share this post if you work with JavaScript.
Found something similar in your project? Let’s talk.

#JavaScript #Security #PrototypePollution #NodeJS #WebSecurity #DevSecOps #SoftwareEngineering

Demystifying Parquet: The Power of Efficient Data Storage in the Cloud

Unlocking the Power of Apache Parquet: A Modern Standard for Data Efficiency

In today’s digital ecosystem, where data volume, velocity, and variety continue to rise, the choice of file format can dramatically impact performance, scalability, and cost. Whether you are an architect designing a cloud-native data platform or a developer managing analytics pipelines, Apache Parquet stands out as a foundational technology you should understand — and probably already rely on.

This article explores what Parquet is, why it matters, and how to work with it in practice — including real examples in Python, Java, Node.js, and Bash for converting and uploading files to Amazon S3.

What Is Apache Parquet?

Apache Parquet is a high-performance, open-source file format designed for efficient columnar data storage. Originally developed by Twitter and Cloudera and now an Apache Software Foundation project, Parquet is purpose-built for use with distributed data processing frameworks like Apache Spark, Hive, Impala, and Drill.

Unlike row-based formats such as CSV or JSON, Parquet organizes data by columns rather than rows. This enables powerful compression, faster retrieval of selected fields, and dramatic performance improvements for analytical queries.

Why Choose Parquet?

✅ Columnar Format = Faster Queries

Because Parquet stores values from the same column together, analytical engines can skip irrelevant data and process only what’s required — reducing I/O and boosting speed.
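
To make that concrete, here is a small Node.js sketch of column projection, assuming the community parquetjs package and its documented ParquetReader API (the file and column names are illustrative):

// npm install parquetjs
// API per the parquetjs README; verify against the version you install
const parquet = require('parquetjs');

async function readOneColumn() {
  const reader = await parquet.ParquetReader.openFile('output.parquet');
  // Requesting a subset of columns means only those column chunks are decoded
  const cursor = reader.getCursor(['name']);

  let record;
  while ((record = await cursor.next())) {
    console.log(record); // { name: '...' } and nothing else
  }
  await reader.close();
}

readOneColumn().catch(console.error);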

Compression and Storage Efficiency

Parquet achieves better compression ratios than row-based formats, thanks to the similarity of values in each column. This translates directly into reduced cloud storage costs.

Schema Evolution

Parquet supports schema evolution, enabling your datasets to grow gracefully. New fields can be added over time without breaking existing consumers.

Interoperability

The format is compatible across multiple ecosystems and languages, including Python (Pandas, PyArrow), Java (Spark, Hadoop), and even browser-based analytics tools.

☁️ Using Parquet with Amazon S3

One of the most common modern use cases for Parquet is in conjunction with Amazon S3, where it powers data lakes, ETL pipelines, and serverless analytics via services like Amazon Athena and Redshift Spectrum.

Here’s how you can write Parquet files and upload them to S3 in different environments:

From CSV to Parquet in Practice

Python Example

import pandas as pd

# Load CSV data
df = pd.read_csv("input.csv")

# Save as Parquet
df.to_parquet("output.parquet", engine="pyarrow")

To upload to S3:

import boto3

s3 = boto3.client("s3")
s3.upload_file("output.parquet", "your-bucket", "data/output.parquet")

Node.js Example

Install the required libraries:

npm install aws-sdk
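
Note that aws-sdk only covers the upload. If you also need to produce the Parquet file in Node.js, one option is the community parquetjs package; a minimal sketch, assuming its documented ParquetSchema/ParquetWriter API (schema and values are illustrative):

// npm install parquetjs
// API per the parquetjs README; verify against the version you install
const parquet = require('parquetjs');

async function writeParquet() {
  // Parquet files are strongly typed, so the schema is declared up front
  const schema = new parquet.ParquetSchema({
    name: { type: 'UTF8' },
    quantity: { type: 'INT64' }
  });

  const writer = await parquet.ParquetWriter.openFile(schema, 'output.parquet');
  await writer.appendRow({ name: 'apple', quantity: 10 });
  await writer.close();
}

writeParquet().catch(console.error);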

Upload file to S3:

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const fileContent = fs.readFileSync('output.parquet');

const params = {
    Bucket: 'your-bucket',
    Key: 'data/output.parquet',
    Body: fileContent
};

s3.upload(params, (err, data) => {
    if (err) throw err;
    console.log(`File uploaded successfully at ${data.Location}`);
});

☕ Java with Apache Spark and AWS SDK

In your pom.xml, include:

<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-hadoop</artifactId>
    <version>1.12.2</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.12.470</version>
</dependency>

Spark conversion:

Dataset<Row> df = spark.read().option("header", "true").csv("input.csv");
df.write().parquet("output.parquet");

Upload to S3:

// Hard-coded keys are shown for brevity only; prefer the default
// credentials provider chain (environment, profile, or IAM role).
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withRegion("us-west-2")
    .withCredentials(new AWSStaticCredentialsProvider(
        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
    .build();

s3.putObject("your-bucket", "data/output.parquet", new File("output.parquet"));

Bash with AWS CLI

aws s3 cp output.parquet s3://your-bucket/data/output.parquet

Final Thoughts

Apache Parquet has quietly become a cornerstone of the modern data stack. It powers everything from ad hoc analytics to petabyte-scale data lakes, bringing consistency and efficiency to how we store and retrieve data.

Whether you are migrating legacy pipelines, designing new AI workloads, or simply optimizing your storage bills — understanding and adopting Parquet can unlock meaningful benefits.

When used in combination with cloud platforms like AWS, the performance, scalability, and cost-efficiency of Parquet-based workflows are hard to beat.


Advanced Encoding in Java, Kotlin, Node.js, and Python

Encoding is essential for handling text, binary data, and secure transmission across applications. Understanding advanced encoding techniques can help prevent data corruption and ensure smooth interoperability across systems. This post explores key encoding challenges and how Java/Kotlin, Node.js, and Python tackle them.


1️⃣ Handling Special Unicode Characters (Emoji, Accents, RTL Text)

Java/Kotlin

Java uses UTF-16 internally, but for external data (JSON, databases, APIs), explicit encoding is required:

String text = "🔧 Café مرحبا";
byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);
String decoded = new String(utf8Bytes, StandardCharsets.UTF_8);
System.out.println(decoded); // 🔧 Café مرحبا

Tip: Always specify StandardCharsets.UTF_8 to avoid platform-dependent defaults.

Node.js

const text = "🔧 Café مرحبا";
const utf8Buffer = Buffer.from(text, 'utf8');
const decoded = utf8Buffer.toString('utf8');
console.log(decoded); // 🔧 Café مرحبا

Tip: Using an incorrect encoding (e.g., latin1) may corrupt characters.

Python

text = "🔧 Café مرحبا"
utf8_bytes = text.encode("utf-8")
decoded = utf8_bytes.decode("utf-8")
print(decoded)  # 🔧 Café مرحبا

Tip: Python 3 handles Unicode by default, but explicit encoding is always recommended.


2️⃣ Encoding Binary Data for Transmission (Base64, Hex, Binary Files)

Java/Kotlin

byte[] data = "Hello World".getBytes(StandardCharsets.UTF_8);
String base64Encoded = Base64.getEncoder().encodeToString(data);
byte[] decoded = Base64.getDecoder().decode(base64Encoded);
System.out.println(new String(decoded, StandardCharsets.UTF_8)); // Hello World

Node.js

const data = Buffer.from("Hello World", 'utf8');
const base64Encoded = data.toString('base64');
const decoded = Buffer.from(base64Encoded, 'base64').toString('utf8');
console.log(decoded); // Hello World

Python

import base64
data = "Hello World".encode("utf-8")
base64_encoded = base64.b64encode(data).decode("utf-8")
decoded = base64.b64decode(base64_encoded).decode("utf-8")
print(decoded)  # Hello World

Tip: Base64 encoding increases data size (~33% overhead), which can be a concern for large files.
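
The section heading also mentions hex; in Node.js the same Buffer API covers it (Java 17+ offers java.util.HexFormat and Python offers bytes.hex() as equivalents):

const data = Buffer.from('Hello World', 'utf8');
const hexEncoded = data.toString('hex');  // 48656c6c6f20576f726c64
const decoded = Buffer.from(hexEncoded, 'hex').toString('utf8');
console.log(decoded); // Hello World

Hex doubles the payload size, but it is easier to eyeball in logs and debuggers than Base64.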


3️⃣ Charset Mismatches and Cross-Language Encoding Issues

A file encoded in ISO-8859-1 (Latin-1) may cause garbled text when read using UTF-8.

Java/Kotlin Solution:

byte[] bytes = Files.readAllBytes(Paths.get("file.txt"));
String text = new String(bytes, StandardCharsets.ISO_8859_1);

Node.js Solution:

const fs = require('fs');
const text = fs.readFileSync("file.txt", { encoding: "latin1" });

Python Solution:

with open("file.txt", "r", encoding="ISO-8859-1") as f:
    text = f.read()

Tip: Always specify encoding explicitly when working with external files.
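
A quick Node.js demonstration of the failure mode:

// 'é' is two bytes (0xC3 0xA9) in UTF-8; decoding those bytes as Latin-1
// turns each byte into its own character:
const utf8Bytes = Buffer.from('Café', 'utf8');
console.log(utf8Bytes.toString('latin1')); // CafÃ©, the classic mojibake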


4️⃣ URL Encoding and Decoding

Java/Kotlin

String encoded = URLEncoder.encode("Hello World!", StandardCharsets.UTF_8);
String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8);

Node.js

const encoded = encodeURIComponent("Hello World!");
const decoded = decodeURIComponent(encoded);

Python

from urllib.parse import quote, unquote
encoded = quote("Hello World!")
decoded = unquote(encoded)

Tip: Use UTF-8 for URL encoding to prevent inconsistencies across different platforms.
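
The platforms do not even agree on which characters to escape, which is exactly the inconsistency the tip warns about:

console.log(encodeURIComponent('Hello World!')); // Hello%20World!
// Java's URLEncoder.encode(...) yields:    Hello+World%21  (space becomes '+', '!' is escaped)
// Python's urllib.parse.quote(...) yields: Hello%20World%21 ('!' is escaped, space becomes %20)
// When interoperating, normalize on one convention (typically RFC 3986 percent-encoding).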


Conclusion: Choosing the Right Approach

  • Java/Kotlin: Strong type safety, but requires careful Charset management.
  • Node.js: Web-friendly but depends heavily on Buffer conversions.
  • Python: Simple and concise, though the strict str/bytes distinction must be managed.

📌 Pro Tip: Always be explicit about encoding when handling external data (APIs, files, databases) to avoid corruption.

 

Understanding Dependency Management and Resolution: A Look at Java, Python, and Node.js


Mastering how dependencies are handled can define your project’s success or failure. Let’s explore the nuances across today’s major development ecosystems.

Introduction

Every modern application relies heavily on external libraries. These libraries accelerate development, improve security, and enable integration with third-party services. However, unmanaged dependencies can lead to catastrophic issues — from version conflicts to severe security vulnerabilities. That’s why understanding dependency management and resolution is absolutely essential, particularly across different programming ecosystems.

What is Dependency Management?

Dependency management involves declaring external components your project needs, installing them properly, ensuring their correct versions, and resolving conflicts when multiple components depend on different versions of the same library. It also includes updating libraries responsibly and securely over time. In short, good dependency management prevents issues like broken builds, “dependency hell”, or serious security holes.

Java: Maven and Gradle

In the Java ecosystem, dependency management is an integrated and structured part of the build lifecycle, using tools like Maven and Gradle.

Maven and Dependency Scopes

Maven uses a declarative pom.xml file to list dependencies. A particularly important notion in Maven is the dependency scope.

Scopes control where and how dependencies are used. Examples include:

  • compile (default): Needed at both compile time and runtime.
  • provided: Needed for compile, but provided at runtime by the environment (e.g., Servlet API in a container).
  • runtime: Needed only at runtime, not at compile time.
  • test: Used exclusively for testing (JUnit, Mockito, etc.).
  • system: Provided by the system explicitly (deprecated practice).

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.13.2</version>
  <scope>test</scope>
</dependency>
    

This nuanced control allows Java developers to avoid bloating production artifacts with unnecessary libraries, and to fine-tune build behaviors. This is a major feature missing from simpler systems like pip or npm.

Gradle

Gradle, offering both Groovy and Kotlin DSLs, also supports scopes through configurations like implementation, runtimeOnly, testImplementation, which have similar meanings to Maven scopes but are even more flexible.


dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
    

Python: pip and Poetry

Python dependency management is simpler, but also less structured compared to Java. With pip, there is no formal concept of scopes.

pip

Developers typically separate main dependencies and development dependencies manually using different files:

  • requirements.txt – Main project dependencies.
  • requirements-dev.txt – Development and test dependencies (pytest, tox, etc.).

This manual split is prone to human error and lacks the rigorous environment control that Maven or Gradle enforce.
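
A typical (illustrative) layout, where the -r include keeps the two files consistent:

# requirements.txt: production dependencies, pinned
requests==2.31.0

# requirements-dev.txt: pulls in the production set, then adds tooling
-r requirements.txt
pytest==7.4.0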

Poetry

Poetry improves the situation by introducing a structured division:


[tool.poetry.dependencies]
requests = "^2.31"

[tool.poetry.dev-dependencies]
pytest = "^7.1"
    

Poetry brings concepts closer to Maven scopes, but they are still less fine-grained (no runtime/compile distinction, for instance).

Node.js: npm and Yarn

JavaScript dependency managers like npm and yarn allow a simple distinction between regular and development dependencies.

npm

Dependencies are declared in package.json under different sections:

  • dependencies – Needed in production.
  • devDependencies – Needed only for development (e.g., testing libraries, linters).

{
  "dependencies": {
    "express": "^4.18.2"
  },
  "devDependencies": {
    "mocha": "^10.2.0"
  }
}
    

While convenient, npm’s dependency management lacks Maven’s level of strictness around dependency resolution, often leading to version mismatches or “node_modules bloat.”

Key Differences Between Ecosystems

When switching between Java, Python, and Node.js environments, developers must be aware of the following fundamental differences:

1. Formality of Scopes

Java’s Maven/Gradle ecosystem defines scopes formally at the dependency level. Python (pip) and JavaScript (npm) ecosystems use looser, file- or section-based categorization.

2. Handling of Transitive Dependencies

Maven and Gradle resolve and include transitive dependencies automatically with sophisticated conflict resolution strategies (e.g., nearest version wins). pip historically had weak transitive dependency handling, leading to issues unless careful pinning is done. npm has flattened node_modules since npm v3, and npm v7+ tightened resolution further (for example, by installing peer dependencies automatically), but conflicts still occur in complex trees.
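
When a conflict does appear, each ecosystem can print its resolved dependency tree; a few commands worth knowing:

# Maven: show the full tree, including conflicting versions that were omitted
mvn dependency:tree -Dverbose

# Gradle: inspect how a given configuration was resolved
gradle dependencies --configuration runtimeClasspath

# npm: show where a package appears and at which versions
npm ls express

# pip: show a package's dependencies and which packages require it
pip show requests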

3. Lockfiles

npm/yarn and Python Poetry use lockfiles (package-lock.json, yarn.lock, poetry.lock) to ensure consistent dependency installations across machines. Maven and Gradle historically did not need lockfiles because they strictly followed declared versions and scopes. However, Gradle introduced lockfile support with dependency locking in newer versions.
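
For example, Gradle's dependency locking is opt-in; a minimal sketch of the documented setup (Groovy DSL):

// build.gradle: opt in to locking for every configuration
dependencyLocking {
    lockAllConfigurations()
}

Running gradle dependencies --write-locks then records the resolved versions in lockfiles that later builds must reproduce exactly.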

4. Dependency Updating Strategy

Java developers often manually manage dependency versions inside pom.xml or use dependencyManagement blocks for centralized control. pip requires updating requirements.txt or regenerating them via pip freeze. npm/yarn allows semver rules (“^”, “~”) but auto-updating can lead to subtle breakages if not careful.
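
Concretely, npm's two common range operators behave like this (comments added for illustration; JSON itself does not allow them):

{
  "dependencies": {
    "express": "^4.18.2",  // caret: accepts any 4.x.y >= 4.18.2 (minor and patch updates)
    "mocha": "~10.2.0"     // tilde: accepts any 10.2.x >= 10.2.0 (patch updates only)
  }
}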

Best Practices Across All Languages

  • Pin exact versions wherever possible to avoid surprise updates.
  • Use lockfiles and commit them to version control (Git).
  • Separate production and development/test dependencies explicitly.
  • Use dependency scanners (e.g., OWASP Dependency-Check, Snyk, npm audit) regularly to detect vulnerabilities.
  • Prefer stable, maintained libraries with good community support and recent commits.

Conclusion

Dependency management, while often overlooked early in projects, becomes critical as applications scale. Maven and Gradle offer the most fine-grained controls via dependency scopes and conflict resolution. Python and JavaScript ecosystems are evolving rapidly, but require developers to be much more careful manually. Understanding these differences, and applying best practices accordingly, will ensure smoother builds, faster delivery, and safer production systems.

Interested in deeper dives into dependency vulnerability scanning, SBOM generation, or automatic dependency update pipelines? Subscribe to our blog for more in-depth content!