Network Statistics (netstats) Log Documentation

Learning to Extract Network Statistics Logs

Introduction

The Network Statistics service (netstats) tracks all network data usage on an Android device, providing detailed information about which apps transmitted data, when, and how much. This service maintains historical records of network activity across WiFi, mobile data, and other interfaces, making it crucial for investigating data exfiltration, communication patterns, and unauthorized network access.

Prerequisites

  • Android device with USB debugging enabled
  • ADB (Android Debug Bridge) installed
  • Understanding of network interfaces (wlan0, rmnet0, etc.)
  • Basic knowledge of UID structure in Android

Step 1: Understanding Network Interfaces

First, identify the network interfaces on the device:

bash
# List network interfaces
adb shell ip addr show

# Common interfaces:
# lo - Loopback (127.0.0.1)
# wlan0 - WiFi interface
# rmnet0 - Mobile data interface (may vary by device)
# tun0 - VPN tunnel interface
# p2p0 - WiFi Direct interface

# Check current network status

adb shell dumpsys connectivity | grep "Active network"

Step 2: Basic Network Statistics Extraction

Create working directory and extract network data:

bash
# Create evidence directory
mkdir netstats_evidence
cd netstats_evidence

# Extract basic network statistics
adb shell dumpsys netstats > netstats_basic.txt

# Extract detailed statistics with UID information
adb shell dumpsys netstats detail > netstats_detail.txt

# Extract statistics for specific time period (last 24 hours)
adb shell dumpsys netstats --hours 24 > netstats_24h.txt

# Extract full historical data
adb shell dumpsys netstats --full > netstats_full.txt

# Machine-readable format

adb shell dumpsys netstats --checkin > netstats_checkin.csv

Step 3: Understanding the Output Structure

Network statistics are organized by interface and UID:

Active interfaces:
iface=wlan0 ident=[{type=WIFI, subType=COMBINED, networkId="HomeNetwork"}]

Dev stats:
Pending bytes: 1798112
History since boot:
ident=[{type=WIFI, subType=COMBINED, networkId="HomeNetwork"}] uid=-1 set=ALL tag=0x0
NetworkStatsHistory: bucketDuration=3600000
st=1705276800000 rb=125698745 rp=98654 tb=45987123 tp=65432 op=0

UID stats:
ident=[{type=WIFI, subType=COMBINED}] uid=10142 set=DEFAULT tag=0x0
NetworkStatsHistory: bucketDuration=3600000

  st=1705276800000 rb=8975432 rp=7654 tb=1234567 tp=3210 op=0
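
These history lines can be decoded mechanically. The snippet below is a minimal Python sketch, assuming the st/rb/rp/tb/tp/op field order shown above; the regex and the parse_bucket helper are illustrative, not part of any Android tooling.

python
import re
from datetime import datetime, timezone

BUCKET_RE = re.compile(
    r'st=(?P<st>\d+)\s+rb=(?P<rb>\d+)\s+rp=(?P<rp>\d+)\s+'
    r'tb=(?P<tb>\d+)\s+tp=(?P<tp>\d+)\s+op=(?P<op>\d+)'
)

def parse_bucket(line):
    """Parse one NetworkStatsHistory bucket line into a dict (sketch)."""
    m = BUCKET_RE.search(line)
    if not m:
        return None
    fields = {k: int(v) for k, v in m.groupdict().items()}
    # Add human-readable helpers
    fields['start'] = datetime.fromtimestamp(fields['st'] / 1000, tz=timezone.utc)
    fields['rx_mb'] = round(fields['rb'] / 1048576, 2)
    fields['tx_mb'] = round(fields['tb'] / 1048576, 2)
    return fields

# Example with the sample line above
print(parse_bucket("st=1705276800000 rb=125698745 rp=98654 tb=45987123 tp=65432 op=0"))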

Step 4: First Analysis Exercise

Let's identify top data consuming apps:

bash
# Extract and sort by data usage
grep -A1 "uid=10" netstats_detail.txt | \
grep -E "uid=|rb=|tb=" | \
awk '
/uid=/ {
current_uid \= $0
gsub(/.*uid=/, "", current_uid)
gsub(/ .*/, "", current_uid)
}
/rb=/ {
if (current_uid) {
rx \= $0
gsub(/.*rb=/, "", rx)
gsub(/ .*/, "", rx)
total_rx[current_uid] += rx
}
}
/tb=/ {
if (current_uid) {
tx \= $0
gsub(/.*tb=/, "", tx)
gsub(/ .*/, "", tx)
total_tx[current_uid] += tx
}
}
END {
print "UID\tDownload(MB)\tUpload(MB)\tTotal(MB)"
for (uid in total_rx) {
rx_mb \= total_rx[uid] / 1048576
tx_mb \= total_tx[uid] / 1048576
total_mb \= rx_mb + tx_mb
printf "%s\t%.2f\t\t%.2f\t\t%.2f\n", uid, rx_mb, tx_mb, total_mb
}
}

' | sort -k4 -nr | head -20

Understanding Key Fields

  1. Network Identity
     • type - MOBILE, WIFI, BLUETOOTH, ETHERNET
     • subType - Network generation (3G, 4G, 5G)
     • networkId - WiFi SSID or carrier info
  2. Statistics Fields
     • st - Start time (epoch milliseconds)
     • rb - Received bytes (download)
     • rp - Received packets
     • tb - Transmitted bytes (upload)
     • tp - Transmitted packets
     • op - Operations count
  3. UID Categories
     • -1 - Total for interface
     • 0-9999 - System UIDs
     • 10000+ - User app UIDs

Working with Network Statistics Logs

How to Extract Specific Timeframes

Filter by Date Range

bash
# Function to convert date to epoch milliseconds
date_to_epoch_ms() {
date -d "$1" +%s000
}

# Extract network usage for specific date range
START_DATE=$(date_to_epoch_ms "2024-01-15 00:00:00")
END_DATE=$(date_to_epoch_ms "2024-01-15 23:59:59")

# Filter statistics within range
awk -v start="$START_DATE" -v end="$END_DATE" '
/st=/ {
time \= $0
gsub(/.*st=/, "", time)
gsub(/ .*/, "", time)

if (time \>= start && time \<= end) {  
  in\_range \= 1  
  print "Time:", strftime("%Y-%m-%d %H:%M:%S", time/1000)  
  print $0  
} else {  
  in\_range \= 0  
}

}

in_range && /rb=|tb=/ {
print $0
}

' netstats_detail.txt > network_usage_20240115.txt

Extract Hourly Network Patterns

bash
# Create hourly usage profile
awk '
/st=/ {
  time = $0
  gsub(/.*st=/, "", time)
  gsub(/ .*/, "", time)
  hour = strftime("%H", time/1000)
}

/rb=/ && hour {
  bytes = $0
  gsub(/.*rb=/, "", bytes)
  gsub(/ .*/, "", bytes)
  hourly_rx[hour] += bytes
}

/tb=/ && hour {
  bytes = $0
  gsub(/.*tb=/, "", bytes)
  gsub(/ .*/, "", bytes)
  hourly_tx[hour] += bytes
}

END {
  print "Hour\tDownload(MB)\tUpload(MB)"
  for (h = 0; h < 24; h++) {
    rx_mb = hourly_rx[sprintf("%02d", h)] / 1048576
    tx_mb = hourly_tx[sprintf("%02d", h)] / 1048576
    printf "%02d:00\t%.2f\t\t%.2f\n", h, rx_mb, tx_mb
  }
}
' netstats_detail.txt > hourly_network_pattern.txt

How to Filter for Specific Events

Identify Data Exfiltration Attempts

bash
# Find unusual upload spikes (uploads > 10MB in one hour)
THRESHOLD_BYTES=$((10 * 1024 * 1024)) # 10MB

awk -v threshold="$THRESHOLD_BYTES" '
/uid=10[0-9]/ {
  current_uid = $0
  gsub(/.*uid=/, "", current_uid)
  gsub(/ .*/, "", current_uid)
}

/st=/ {
  current_time = $0
  gsub(/.*st=/, "", current_time)
  gsub(/ .*/, "", current_time)
}

/tb=/ && current_uid && current_time {
  tx_bytes = $0
  gsub(/.*tb=/, "", tx_bytes)
  gsub(/ .*/, "", tx_bytes)

  # Force numeric comparison
  if (tx_bytes + 0 > threshold + 0) {
    print "ALERT: Large upload detected"
    print "  UID:", current_uid
    print "  Time:", strftime("%Y-%m-%d %H:%M:%S", current_time/1000)
    print "  Upload size:", tx_bytes / 1048576, "MB"
    print "---"
  }
}
' netstats_detail.txt > data_exfiltration_alerts.txt

Track Specific App Network Usage

bash
# First, find UID for target app
TARGET_APP="com.whatsapp"
TARGET_UID=$(adb shell dumpsys package $TARGET_APP | grep userId= | head -1 | sed 's/.*userId=//')

echo "Tracking network usage for $TARGET_APP (UID: $TARGET_UID)"

# Extract all network activity for this UID
grep -A5 "uid=$TARGET_UID" netstats_detail.txt > ${TARGET_APP}_network.txt

# Parse and summarize
awk -v app="$TARGET_APP" '
/type=MOBILE/ { current_type = "Mobile" }
/type=WIFI/   { current_type = "WiFi" }

/rb=/ {
  rx = $0
  gsub(/.*rb=/, "", rx)
  gsub(/ .*/, "", rx)
  rx_total[current_type] += rx
}

/tb=/ {
  tx = $0
  gsub(/.*tb=/, "", tx)
  gsub(/ .*/, "", tx)
  tx_total[current_type] += tx
}

END {
  print "Network usage for", app ":"
  for (type in rx_total) {
    print type ":"
    print "  Downloaded:", rx_total[type] / 1048576, "MB"
    print "  Uploaded:", tx_total[type] / 1048576, "MB"
    print "  Total:", (rx_total[type] + tx_total[type]) / 1048576, "MB"
  }
}
' ${TARGET_APP}_network.txt

How to Correlate with Other Logs

Correlate Network Activity with App Usage

bash
# Extract network spike times
awk '
/tb=/ {
  tx = $0
  gsub(/.*tb=/, "", tx)
  gsub(/ .*/, "", tx)

  if (tx + 0 > 5242880) {  # 5MB threshold
    if (match($0, /st=([0-9]+)/, time)) {
      print time[1], tx
    }
  }
}
' netstats_detail.txt > network_spikes.txt

# Check app activity during network spikes
while read spike_time spike_size; do
echo "Network spike at $(date -d @$((spike_time/1000))): $((spike_size/1048576))MB"

# Find active apps at that time from usage stats
WINDOW_START=$((spike_time - 300000)) # 5 minutes before
WINDOW_END=$((spike_time + 300000)) # 5 minutes after

adb shell dumpsys usagestats | \
awk -v start="$WINDOW_START" -v end="$WINDOW_END" '
/lastTimeActive=/ {
if (match($0, /lastTimeActive=([0-9]+)/, t)) {
if (t[1] >= start && t[1] \<= end) {
print " Active app:", $0
}
}
}
'

done \< network_spikes.txt

Correlate Network Activity with Wake Locks

bash
# Find correlation between wake locks and network usage
# Extract wake lock periods from battery stats
adb shell dumpsys batterystats --history | \
grep -E "\+wake_lock|\-wake_lock" > wake_locks.txt

# Extract network active periods
awk '
/rb=[0-9]+/ || /tb=[0-9]+/ {
if (match($0, /st=([0-9]+)/, t)) {
time = t[1]
# Check if any data transferred
if (match($0, /rb=([0-9]+)/, rx) && rx[1] > 0) {
print time, "RX", rx[1]
}
if (match($0, /tb=([0-9]+)/, tx) && tx[1] > 0) {
print time, "TX", tx[1]
}
}
}
' netstats_detail.txt > network_active_times.txt

# Correlate
echo "Checking for network activity during wake locks..."

# (Implementation would compare timestamps)

How to Identify Anomalies

Detect Suspicious Network Patterns

bash
# 1. Night time network activity (12 AM - 5 AM)
awk '
BEGIN {
  night_start = 0
  night_end = 5
}

/uid=/ {
  if (match($0, /uid=([0-9]+)/, u)) {
    current_uid = u[1]
  }
}

/st=/ {
  if (match($0, /st=([0-9]+)/, t)) {
    hour = strftime("%H", t[1]/1000) + 0   # force numeric hour
    if (hour >= night_start && hour < night_end) {
      night_time = 1
    } else {
      night_time = 0
    }
  }
}

night_time && /tb=/ {
  if (match($0, /tb=([0-9]+)/, tx) && tx[1] > 1048576) { # > 1MB
    print "SUSPICIOUS: Night upload at", strftime("%Y-%m-%d %H:%M:%S", t[1]/1000)
    print "  Size:", tx[1] / 1048576, "MB"
    if (current_uid) print "  UID:", current_uid
  }
}
' netstats_detail.txt > night_network_activity.txt

Identify Hidden App Communication

bash
# Find apps with network activity but no UI activity
# Get UIDs with network usage
awk '/uid=10[0-9]+/ {
match($0, /uid=([0-9]+)/, u)
uid = u[1]
}
/rb=|tb=/ && uid {
if (match($0, /(rb|tb)=([0-9]+)/, bytes) && bytes[2] > 0) {
network_uids[uid] = 1
}
}
END {
for (uid in network_uids) {
print uid
}
}
' netstats_detail.txt > network_active_uids.txt

# Check which have no screen time
while read uid; do
# Convert UID to package name
PACKAGE=$(adb shell cmd package list packages --uid $uid | awk '{print $1}' | sed 's/package://')

if [ ! -z "$PACKAGE" ]; then
# Check if package has usage stats
if ! adb shell dumpsys usagestats | grep -q "package=$PACKAGE.*totalTimeInForeground=[1-9]"; then
echo "Hidden network activity: $PACKAGE (UID: $uid)"
fi
fi

done < network_active_uids.txt

Network Statistics Log Structure

Complete Field Definitions

Network Identity Structure

ident=[{type=TYPE, subType=SUBTYPE, networkId="ID", metered=BOOL, defaultNetwork=BOOL}]

Fields:
type: MOBILE|WIFI|BLUETOOTH|ETHERNET|VPN|MOBILE_SUPL
subType: For MOBILE: COMBINED|1xRTT|CDMA|EDGE|GPRS|EVDO_0|EVDO_A|EVDO_B|HSPA|HSDPA|HSUPA|IDEN|LTE|EHRPD|HSPAP|GSM|TD_SCDMA|IWLAN|NR
For WIFI: COMBINED
networkId: String identifier (SSID for WiFi, carrier for mobile)
metered: true|false (is connection metered/limited)

defaultNetwork: true|false (is this the default route)
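
When scripting against dumps, it helps to turn the identity block into a dictionary. The following is a small Python sketch that assumes the key=value pairs sit inside ident=[{...}] exactly as shown above; the parse_ident name is illustrative.

python
import re

def parse_ident(line):
    """Parse an ident=[{...}] block into a dict of its key=value pairs (sketch)."""
    m = re.search(r'ident=\[\{(.*?)\}\]', line)
    if not m:
        return {}
    ident = {}
    for pair in m.group(1).split(','):
        if '=' not in pair:
            continue
        key, value = pair.split('=', 1)
        ident[key.strip()] = value.strip().strip('"')
    return ident

sample = 'ident=[{type=WIFI, subType=COMBINED, networkId="HomeNetwork", metered=false}]'
print(parse_ident(sample))
# {'type': 'WIFI', 'subType': 'COMBINED', 'networkId': 'HomeNetwork', 'metered': 'false'}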

Statistics Entry Structure

NetworkStatsHistory: bucketDuration=3600000
st=1705276800000 rb=125698745 rp=98654 tb=45987123 tp=65432 op=15

Fields:
bucketDuration: Time period in milliseconds (3600000 = 1 hour)
st: Start time (epoch milliseconds)
rb: Received bytes (downloaded)
rp: Received packets
tb: Transmitted bytes (uploaded)
tp: Transmitted packets

op: Operations (socket operations count)
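
Because every bucket spans a fixed bucketDuration, the byte counters convert directly to an average rate (bytes * 8 / seconds). A short worked example in Python, using the sample entry above and assuming one-hour buckets:

python
def bucket_rates(rb, tb, bucket_duration_ms=3600000):
    """Average throughput for one history bucket, in Mbit/s."""
    seconds = bucket_duration_ms / 1000
    return {
        'rx_mbit_s': round(rb * 8 / seconds / 1e6, 3),  # received bits per second
        'tx_mbit_s': round(tb * 8 / seconds / 1e6, 3),  # transmitted bits per second
    }

# rb=125698745, tb=45987123 over a 1-hour bucket
print(bucket_rates(125698745, 45987123))
# roughly 0.279 Mbit/s down and 0.102 Mbit/s up, averaged over the hour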

UID Statistics Structure

UID stats:
ident=[...] uid=10142 set=DEFAULT tag=0x0
ident=[...] uid=10142 set=FOREGROUND tag=0x0
ident=[...] uid=10142 set=BACKGROUND tag=0x0

Sets:
ALL: Combined foreground and background
DEFAULT: Legacy category (usually equals ALL)
FOREGROUND: Traffic while app in foreground
BACKGROUND: Traffic while app in background

Special UIDs:
-1: Total for interface
0: Root/System
1000-1999: System services

10000+: User applications (10000 + app_id)
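
Android builds a UID as userId * 100000 + appId, with app IDs starting at 10000, so a UID can be split back into its parts. A minimal helper (the classify_uid name is illustrative):

python
def classify_uid(uid):
    """Split an Android UID into user/app components and classify it (sketch)."""
    if uid == -1:
        return {'category': 'interface total'}
    user_id, app_id = divmod(uid, 100000)
    category = 'system' if app_id < 10000 else 'user app'
    return {'user_id': user_id, 'app_id': app_id, 'category': category}

print(classify_uid(10142))    # {'user_id': 0, 'app_id': 10142, 'category': 'user app'}
print(classify_uid(1010123))  # secondary user (user_id=10) running app_id 10123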

Interface Mapping

Common Interface Names:
lo: Loopback (127.0.0.1)
wlan0: Primary WiFi interface
p2p0: WiFi Direct
rmnet0: Mobile data (Qualcomm)
ccmni0: Mobile data (MediaTek)
tun0: VPN tunnel
bt-pan: Bluetooth tethering

rndis0: USB tethering
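
For readable per-interface summaries, a prefix lookup is often enough. The map below is an illustrative assumption based on the common names listed above; real interface names vary by chipset and OEM, so extend it per device.

python
# Hypothetical prefix map for labeling interfaces in reports; extend per device.
INTERFACE_PREFIXES = {
    'lo': 'Loopback',
    'wlan': 'WiFi',
    'p2p': 'WiFi Direct',
    'rmnet': 'Mobile data (Qualcomm)',
    'ccmni': 'Mobile data (MediaTek)',
    'tun': 'VPN tunnel',
    'bt-pan': 'Bluetooth tethering',
    'rndis': 'USB tethering',
}

def label_interface(iface):
    """Best-effort label for an interface name, falling back to the raw name."""
    for prefix, label in INTERFACE_PREFIXES.items():
        if iface.startswith(prefix):
            return label
    return iface

print(label_interface('rmnet_data0'))  # Mobile data (Qualcomm)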

Data Types and Formats

Field            Type     Format              Example
Timestamp (st)   Long     Epoch milliseconds  1705276800000
Bytes (rb/tb)    Long     Bytes               125698745
Packets (rp/tp)  Long     Count               98654
Duration         Long     Milliseconds        3600000
UID              Integer  User ID             10142
Tag              Hex      Socket tag          0x0
Operations       Integer  Count               15

Retention Periods

Data Type       Retention   Notes
Detailed stats  90 days     Full byte-level data
Summary stats   90 days     Aggregated by day
Dev stats       Since boot  Interface totals
UID stats       90 days     Per-app data
Tag stats       90 days     Tagged socket data

Sample Outputs with Annotations

Normal App Usage Pattern

# WhatsApp normal usage
ident=[{type=WIFI, subType=COMBINED, networkId="HomeWiFi"}] uid=10142 set=FOREGROUND tag=0x0
NetworkStatsHistory: bucketDuration=3600000
st=1705276800000 rb=15234567 rp=12453 tb=8956234 tp=9832 op=453
# Time: 2024-01-15 08:00-09:00
# Downloaded: 14.5 MB (messages, media)
# Uploaded: 8.5 MB (messages, photos)
# Packets ratio normal for messaging app

st=1705280400000 rb=8543210 rp=8765 tb=4532109 tp=5432 op=234
# Time: 2024-01-15 09:00-10:00
# Downloaded: 8.1 MB
# Uploaded: 4.3 MB

# Consistent with active messaging

Suspicious Data Exfiltration Pattern

# Unknown app with suspicious pattern
ident=[{type=MOBILE, subType=LTE}] uid=10289 set=BACKGROUND tag=0x0
NetworkStatsHistory: bucketDuration=3600000
st=1705312800000 rb=543210 rp=432 tb=156789012 tp=23456 op=12
# Time: 2024-01-15 03:00-04:00 (3 AM!)
# Downloaded: 0.5 MB
# Uploaded: 149.5 MB (HUGE!)
# Background activity while user sleeping
# Very asymmetric traffic (uploading >> downloading)

# RED FLAG: Possible data exfiltration

VPN Usage Pattern

# VPN tunnel interface
ident=[{type=VPN, subType=COMBINED, networkId=""}] uid=0 set=ALL tag=0x0
NetworkStatsHistory: bucketDuration=3600000
st=1705294800000 rb=89234567 rp=76543 tb=34567890 tp=45678 op=0
# All traffic routed through VPN
# Note: Individual app attribution may be masked

# Underlying mobile connection
ident=[{type=MOBILE, subType=LTE}] uid=10135 set=DEFAULT tag=0x0

# VPN app's own traffic (tunnel overhead)

Quick Reference Commands

bash
# Essential Network Statistics Commands
adb shell dumpsys netstats # Basic statistics
adb shell dumpsys netstats detail # Detailed with UIDs
adb shell dumpsys netstats --full # Complete history
adb shell dumpsys netstats --hours 24 # Last 24 hours
adb shell dumpsys netstats --checkin # CSV format
adb shell dumpsys netstats --uid # Group by UID

# Analysis Commands
# Top data users
adb shell dumpsys netstats | grep -A5 "uid=10" | grep -E "uid=|rb=|tb="

# Mobile data only
adb shell dumpsys netstats | grep -A10 "type=MOBILE"

# WiFi networks used
adb shell dumpsys netstats | grep "networkId=" | sort -u

# Background data usage
adb shell dumpsys netstats | grep -A5 "set=BACKGROUND"

# Large transfers (>10MB)
adb shell dumpsys netstats | awk '{ for (i = 1; i <= NF; i++) { n = split($i, kv, "="); if (n == 2 && (kv[1] == "rb" || kv[1] == "tb") && kv[2] + 0 > 10485760) { print; next } } }'

# Data usage by hour

adb shell dumpsys netstats | grep "st=" | awk '{print strftime("%H", $NF/1000)}' | sort | uniq -c

Understanding Network Statistics Logs

Why Network Statistics Logs Exist

Android tracks network statistics for several critical purposes:

  1. User Features
     • Data usage warnings and limits
     • Per-app data usage display
     • Background data restrictions
     • Data saver mode
  2. System Management
     • Network performance optimization
     • Bandwidth allocation
     • Connection quality monitoring
     • Billing and metering
  3. Security and Policy
     • Firewall rules enforcement
     • VPN traffic accounting
     • Tethering detection
     • Enterprise policy compliance

How Android Generates Network Data

Data Collection Architecture

Network Traffic
    ↓
Kernel (eBPF/iptables)
    ├── Per-UID accounting
    ├── Per-interface tracking
    └── Socket tagging
    ↓
NetworkStatsService
    ├── RAM buffer (active)
    ├── Periodic snapshots
    └── Historical database
    ↓
/data/system/netstats/

Traffic Attribution Process

  1. UID Mapping: Every socket is tagged with the creating app's UID
  2. Interface Tracking: Bytes counted per network interface
  3. Foreground/Background: State tracked when bytes transferred
  4. Persistence: Hourly snapshots saved to disk (bucket alignment is sketched after this list)
  5. Aggregation: Historical data compressed over time
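
Step 4 above means every sample is accounted to a fixed-width bucket. A short Python sketch of how a timestamp maps to its bucket, assuming the 3600000 ms bucketDuration seen in the dumps:

python
from datetime import datetime, timezone

def bucket_start(timestamp_ms, bucket_duration_ms=3600000):
    """Start of the history bucket that contains timestamp_ms."""
    start_ms = (timestamp_ms // bucket_duration_ms) * bucket_duration_ms
    return datetime.fromtimestamp(start_ms / 1000, tz=timezone.utc)

# A transfer at 2024-01-15 00:25:30 UTC is accounted to the 00:00 bucket
print(bucket_start(1705278330000))  # 2024-01-15 00:00:00+00:00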

Storage Structure

/data/system/netstats/
├── netstats_uid.bin # Per-app statistics
├── netstats_dev.bin # Per-interface statistics
├── netstats_xt.bin # Detailed statistics

└── netstats.bin # Summary data
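
On rooted or userdebug builds it may be possible to preserve these raw files as primary evidence. The sketch below assumes adb root succeeds (it will not on production builds) and that the directory layout matches the one shown above; treat it as best-effort.

python
import subprocess

def pull_raw_netstats(dest='raw_netstats'):
    """Attempt to pull /data/system/netstats/ via adb (requires root access)."""
    subprocess.run(['adb', 'root'], check=False)  # no-op on production builds
    result = subprocess.run(
        ['adb', 'pull', '/data/system/netstats/', dest],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        print('Pull failed (device likely not rooted):', result.stderr.strip())
    else:
        print('Raw netstats files copied to', dest)

pull_raw_netstats()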

Forensic Significance

Critical Evidence Types

  1. Communication Patterns
     • When apps communicated
     • Data volume patterns
     • Upload/download ratios
     • Network type preferences
  2. Data Exfiltration Detection
     • Large uploads during odd hours
     • Asymmetric traffic patterns
     • Hidden app communications
     • Unusual destination detection
  3. User Behavior Analysis
     • Active usage periods
     • Location inference (WiFi SSIDs)
     • App preferences
     • Data consumption habits
  4. Timeline Correlation
     • Network activity during incidents
     • Communication before/after events
     • Background activity patterns
     • Connection to specific networks

Unique Investigative Value

Investigation Type      Network Stats Evidence
Data Theft              Large uploads, timing, destination apps
Communication Analysis  Messaging app data volumes and times
Location Intelligence   WiFi SSID history, mobile tower changes
Malware Detection       Background data, C2 communication patterns
Usage Patterns          Peak activity times, data consumption
Corporate Espionage     Unauthorized app communications

Relationship to Other Logs

Network Stats Correlation Map

Network Stats ←→ Battery Stats
└─ Mobile radio active time links to data

Network Stats ←→ Package Manager
└─ UIDs map to installed apps

Network Stats ←→ Usage Stats
└─ Foreground/background aligns with usage

Network Stats ←→ Connection Manager

└─ Network state changes correlate

Powerful Correlation Techniques

Message Timing Analysis
bash
# Correlate message app data spikes with events
MESSAGING_APPS="whatsapp|telegram|signal"

# Find communication bursts
awk -v apps="$MESSAGING_APPS" '
# Note: this assumes package names appear near the uid= lines in the dump;
# if they do not, resolve UIDs to packages first and match on UIDs instead.
$0 ~ apps && /uid=/ {
  current_app = 1
}
current_app && /tb=/ {
  if (match($0, /st=([0-9]+).*tb=([0-9]+)/, m)) {
    if (m[2] > 100000) { # Significant upload
      print strftime("%Y-%m-%d %H:%M:%S", m[1]/1000),
            "Data sent:", m[2]/1024, "KB"
    }
  }
}
/^$/ { current_app = 0 }
' netstats_detail.txt

Location Change Detection
bash
# Track WiFi network changes
grep "networkId=" netstats_detail.txt | \
grep "WIFI" | \
awk '{
if (match($0, /networkId="([^"]+)"/, net) &&
match($0, /st=([0-9]+)/, time)) {
print strftime("%Y-%m-%d %H:%M:%S", time[1]/1000),
"Connected to:", net[1]
}

  1. }' | sort | uniq

Hidden Service Detection
bash
# Find system services with unexpected network usage
awk '
/uid=[0-9]{1,3} / { # System UIDs (0-999)
  if (match($0, /uid=([0-9]+)/, u)) {
    current_uid = u[1]
  }
}
/tb=/ && current_uid {
  if (match($0, /tb=([0-9]+)/, bytes) && bytes[1] > 1048576) {
    print "System UID", current_uid, "uploaded",
          bytes[1]/1048576, "MB"
  }
}
' netstats_detail.txt

Admissibility Strengths

  1. Technical Reliability
     • Kernel-level collection (very accurate)
     • Cannot be modified by apps
     • Cryptographically signed on some devices
     • Industry-standard collection method
  2. Metadata Only
     • No content captured
     • Only size, time, and app attribution
     • Less privacy-invasive than packet capture
     • Generally requires less stringent warrant

Court Presentation Strategies

python
#!/usr/bin/env python3
# create_network_timeline.py

import re
import json
from datetime import datetime


class NetworkForensicAnalyzer:
    def __init__(self, netstats_file):
        self.netstats_file = netstats_file
        self.events = []

    def parse_network_events(self):
        """Extract significant network events"""
        with open(self.netstats_file, 'r') as f:
            content = f.read()

        # Find all data transfers over threshold
        pattern = r'uid=(\d+).*?st=(\d+).*?rb=(\d+).*?tb=(\d+)'

        for match in re.finditer(pattern, content, re.DOTALL):
            uid = match.group(1)
            timestamp = int(match.group(2))
            rx_bytes = int(match.group(3))
            tx_bytes = int(match.group(4))

            # Flag significant transfers
            if rx_bytes > 1048576 or tx_bytes > 1048576:  # > 1MB
                self.events.append({
                    'timestamp': datetime.fromtimestamp(timestamp / 1000),
                    'uid': uid,
                    'downloaded_mb': round(rx_bytes / 1048576, 2),
                    'uploaded_mb': round(tx_bytes / 1048576, 2),
                    'total_mb': round((rx_bytes + tx_bytes) / 1048576, 2)
                })

    def generate_court_report(self):
        """Create court-friendly report"""
        print("NETWORK ACTIVITY FORENSIC REPORT")
        print("=" * 50)
        print(f"Analysis Date: {datetime.now()}")
        print(f"Total Events Analyzed: {len(self.events)}")
        print("\nSIGNIFICANT DATA TRANSFERS:")
        print("-" * 50)

        # Sort by timestamp
        for event in sorted(self.events, key=lambda x: x['timestamp']):
            print(f"\nTime: {event['timestamp']}")
            print(f"Application UID: {event['uid']}")
            print(f"Data Downloaded: {event['downloaded_mb']} MB")
            print(f"Data Uploaded: {event['uploaded_mb']} MB")

            # Flag suspicious patterns
            if event['uploaded_mb'] > event['downloaded_mb'] * 10:
                print("⚠️  ALERT: Unusual upload ratio detected")

            if 0 <= event['timestamp'].hour <= 5:
                print("⚠️  ALERT: Activity during sleeping hours")

    def export_for_visualization(self, output_file):
        """Export data for timeline visualization tools"""
        timeline_data = []
        for event in self.events:
            timeline_data.append({
                'timestamp': event['timestamp'].isoformat(),
                'uid': event['uid'],
                'event_type': 'network_transfer',
                'details': {
                    'downloaded_mb': event['downloaded_mb'],
                    'uploaded_mb': event['uploaded_mb'],
                    'suspicious': event['uploaded_mb'] > 50 or
                                  (0 <= event['timestamp'].hour <= 5)
                }
            })

        with open(output_file, 'w') as f:
            json.dump(timeline_data, f, indent=2)


# Usage
analyzer = NetworkForensicAnalyzer('netstats_detail.txt')
analyzer.parse_network_events()
analyzer.generate_court_report()
analyzer.export_for_visualization('network_timeline.json')

Privacy and Warrant Considerations

  1. Scope Definition
     • Network statistics are metadata, not content
     • Less invasive than packet capture
     • Still reveals communication patterns
     • May require specific warrant language

Chain of Custody
bash
# Document network state during extraction
echo "=== Network Forensic Extraction \===" > network_extraction_log.txt
echo "Date: $(date)" >> network_extraction_log.txt
echo "Device: $(adb shell getprop ro.product.model)" >> network_extraction_log.txt
echo "Current Network:" >> network_extraction_log.txt
adb shell dumpsys connectivity | grep "Active network" >> network_extraction_log.txt

# Extract with hash
adb shell dumpsys netstats detail | tee netstats_evidence.txt | \
  sha256sum > netstats_evidence.sha256

Common Challenges and Solutions

Challenge                  Solution
UID to app mapping         Cross-reference with package manager dump (see the sketch below)
Time zone confusion        Timestamps are epoch milliseconds; render them in one agreed time zone
VPN obfuscation            Check tun0 interface, note limitations
Missing historical data    Limited to 90-day retention
Tethering detection        Check for forwarded traffic patterns
Background vs foreground   Use set=BACKGROUND/FOREGROUND filters
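
For the first challenge in the table, the UID-to-package mapping can be scripted. This sketch assumes adb shell pm list packages -U prints lines like package:com.example.app uid:10123; the exact output format differs across Android versions, so verify it on the target device.

python
import re
import subprocess

def uid_to_packages():
    """Build a UID -> [package] map from `pm list packages -U` (output format assumed)."""
    out = subprocess.run(
        ['adb', 'shell', 'pm', 'list', 'packages', '-U'],
        capture_output=True, text=True, check=True
    ).stdout
    mapping = {}
    for line in out.splitlines():
        m = re.match(r'package:(\S+)\s+uid:(\d+)', line.strip())
        if m:
            mapping.setdefault(int(m.group(2)), []).append(m.group(1))
    return mapping

packages = uid_to_packages()
print(packages.get(10142, ['<unknown UID>']))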

Advanced Analysis Techniques

Behavioral Pattern Recognition

bash
#!/bin/bash
# detect_communication_patterns.sh

echo "=== Communication Pattern Analysis \==="

# 1. Identify regular communication patterns (beaconing)
awk '
/uid=10[0-9]+/ {
  if (match($0, /uid=([0-9]+)/, u)) {
    current_uid = u[1]
  }
}

/st=/ && current_uid {
  if (match($0, /st=([0-9]+)/, t)) {
    # Store timestamps per UID (space-separated, no leading separator)
    uid_times[current_uid] = (uid_times[current_uid] == "" ? t[1] : uid_times[current_uid] " " t[1])
  }
}

END {
  print "Checking for regular communication patterns..."
  for (uid in uid_times) {
    n = split(uid_times[uid], times, " ")

    # Check for regular intervals
    regular = 1
    if (n > 5) {
      for (i = 3; i <= n; i++) {
        interval1 = times[i] - times[i-1]
        interval2 = times[i-1] - times[i-2]

        # Allow 10% variance
        if (interval1 < interval2 * 0.9 || interval1 > interval2 * 1.1) {
          regular = 0
          break
        }
      }

      if (regular) {
        print "UID", uid, "shows regular communication pattern"
        print "  Interval:", interval1/1000, "seconds"
      }
    }
  }
}
' netstats_detail.txt

# 2. Detect data staging (gradual accumulation then burst)
echo -e "\n\=== Data Staging Detection \==="
awk '
/uid=/ {
if (match($0, /uid=([0-9]+)/, u)) {
current_uid \= u[1]
}
}

/tb=/ && current_uid {
if (match($0, /st=([0-9]+).*tb=([0-9]+)/, m)) {
time_slot \= int(m[1] / 3600000) # Hour slots
uploads[current_uid][time_slot] += m[2]
}
}

END {
for (uid in uploads) {
max_upload \= 0
total_upload \= 0
count \= 0

  for (slot in uploads\[uid\]) {  
    count++  
    total\_upload \+= uploads\[uid\]\[slot\]  
    if (uploads\[uid\]\[slot\] \> max\_upload) {  
      max\_upload \= uploads\[uid\]\[slot\]  
    }  
  }

  if (count \> 5 && max\_upload \> total\_upload \* 0.5) {  
    print "UID", uid, "shows data staging pattern"  
    print "  Burst upload:", max\_upload/1048576, "MB"  
    print "  Total upload:", total\_upload/1048576, "MB"  
  }  
}

}

' netstats_detail.txt

Covert Channel Detection

python
#!/usr/bin/env python3
# detect_covert_channels.py

import re
from collections import defaultdict
from statistics import mean, stdev


class CovertChannelDetector:
    def __init__(self, netstats_file):
        self.netstats_file = netstats_file
        self.uid_patterns = defaultdict(list)

    def analyze_traffic_patterns(self):
        """Detect potential covert channels"""
        with open(self.netstats_file, 'r') as f:
            content = f.read()

        # Parse all transfers
        pattern = r'uid=(\d+).*?st=(\d+).*?rb=(\d+).*?rp=(\d+).*?tb=(\d+).*?tp=(\d+)'

        for match in re.finditer(pattern, content, re.DOTALL):
            uid = match.group(1)
            timestamp = int(match.group(2))
            rx_bytes = int(match.group(3))
            rx_packets = int(match.group(4))
            tx_bytes = int(match.group(5))
            tx_packets = int(match.group(6))

            # Calculate bytes per packet
            rx_bpp = rx_bytes / rx_packets if rx_packets > 0 else 0
            tx_bpp = tx_bytes / tx_packets if tx_packets > 0 else 0

            self.uid_patterns[uid].append({
                'timestamp': timestamp,
                'rx_bpp': rx_bpp,
                'tx_bpp': tx_bpp,
                'rx_bytes': rx_bytes,
                'tx_bytes': tx_bytes
            })

    def detect_anomalies(self):
        """Find suspicious patterns"""
        print("=== Covert Channel Detection Report ===\n")

        for uid, patterns in self.uid_patterns.items():
            if len(patterns) < 10:
                continue

            # Check for DNS tunneling (many small packets)
            small_packet_count = sum(1 for p in patterns
                                     if 0 < p['tx_bpp'] < 100 and p['tx_bytes'] > 1000)

            if small_packet_count > len(patterns) * 0.7:
                print(f"UID {uid}: Possible DNS tunneling detected")
                print(f"  {small_packet_count} transfers with small packet sizes")

            # Check for timing channels (regular intervals)
            if len(patterns) > 20:
                intervals = []
                for i in range(1, len(patterns)):
                    intervals.append(patterns[i]['timestamp'] - patterns[i - 1]['timestamp'])

                if intervals:
                    avg_interval = mean(intervals)
                    std_interval = stdev(intervals) if len(intervals) > 1 else 0

                    if std_interval < avg_interval * 0.1:  # Very regular
                        print(f"UID {uid}: Possible timing channel detected")
                        print(f"  Regular intervals: {avg_interval/1000:.1f}s ± {std_interval/1000:.1f}s")

            # Check for steganography (unusual packet sizes)
            packet_sizes = [p['tx_bpp'] for p in patterns if p['tx_bpp'] > 0]
            if packet_sizes:
                unique_sizes = len(set(packet_sizes))
                if unique_sizes > len(packet_sizes) * 0.8:
                    print(f"UID {uid}: Unusual packet size variation")
                    print(f"  {unique_sizes} unique packet sizes in {len(packet_sizes)} transfers")


# Usage
detector = CovertChannelDetector('netstats_detail.txt')
detector.analyze_traffic_patterns()
detector.detect_anomalies()

Geolocation Through Network Analysis

bash
# Extract location clues from network identifiers
echo "=== Location Intelligence from Network Stats \==="

# 1. WiFi SSID timeline (location changes)
grep -E "type=WIFI.*networkId=" netstats_detail.txt | \
awk '{
if (match($0, /networkId="([^"]+)"/, net) &&
match($0, /st=([0-9]+)/, time)) {
# Decode SSID
ssid \= net[1]
timestamp \= strftime("%Y-%m-%d %H:%M:%S", time[1]/1000)

  if (ssid \!= last\_ssid) {  
    print timestamp, "- Connected to WiFi:", ssid  
    last\_ssid \= ssid  
  }  
}

}' > wifi_location_timeline.txt

# 2. Mobile network changes (tower switches)
grep -E "type=MOBILE.*subType=" netstats_detail.txt | \
awk '{
if (match($0, /subType=([^,}]+)/, type) &&
match($0, /st=([0-9]+)/, time)) {
network_type \= type[1]
timestamp \= strftime("%Y-%m-%d %H:%M:%S", time[1]/1000)

  if (network\_type \!= last\_type) {  
    print timestamp, "- Mobile network:", network\_type  
    last\_type \= network\_type  
  }  
}

}' > mobile_network_changes.txt

# 3. Known location SSIDs
cat > known_locations.txt << EOF
HomeWiFi|User residence
OfficeGuest|Workplace
Starbucks|Coffee shop
Airport_Free|Airport
EOF

# Match against known locations
while IFS='|' read -r ssid location; do
grep "$ssid" wifi_location_timeline.txt | \
sed "s/$ssid/$ssid ($location)/"

done < known_locations.txt > identified_locations.txt

Integration with Other Tools

Network Forensics Toolkit Integration

bash
#!/bin/bash
# export_for_analysis.sh

# 1. Export for Wireshark correlation
echo "Creating PCAP correlation data..."
awk '
/uid=/ {
  if (match($0, /uid=([0-9]+)/, u)) current_uid = u[1]
}
/st=/ && /rb=/ && /tb=/ {
  if (match($0, /st=([0-9]+)/, t) &&
      match($0, /rb=([0-9]+)/, rx) &&
      match($0, /tb=([0-9]+)/, tx)) {
    printf "%s,%s,%s,%s\n",
      strftime("%Y-%m-%d %H:%M:%S", t[1]/1000),
      current_uid, rx[1], tx[1]
  }
}
' netstats_detail.txt > network_stats_for_pcap.csv

# 2. Create timeline for Plaso/Timeline Explorer
echo "Creating timeline entries..."
cat > network_timeline.csv << 'HEADER'
Date,Time,Source,Type,User,Activity,Filename,Description,Other
HEADER

awk '
/uid=10[0-9]+/ {
  if (match($0, /uid=([0-9]+)/, u)) current_uid = u[1]
}
/tb=/ && current_uid {
  if (match($0, /st=([0-9]+)/, t) &&
      match($0, /tb=([0-9]+)/, tx) &&
      tx[1] > 1048576) {
    date = strftime("%Y-%m-%d", t[1]/1000)
    time = strftime("%H:%M:%S", t[1]/1000)
    desc = sprintf("UID %s uploaded %d MB", current_uid, tx[1]/1048576)

    printf "%s,%s,NetworkStats,DataTransfer,UID%s,Upload,,\"%s\",\n",
      date, time, current_uid, desc
  }
}
' netstats_detail.txt >> network_timeline.csv

Conclusion

Network statistics logs provide invaluable forensic evidence about device communication patterns, data exfiltration, and user behavior. Key forensic values include:

  1. Complete Traffic Attribution: Every byte is attributed to specific apps
  2. Historical Persistence: 90-day retention captures long-term patterns
  3. Metadata Rich: Reveals when, how much, and which apps communicated
  4. Tamper Resistant: Kernel-level collection prevents app manipulation
  5. Privacy Balanced: No content capture, only metadata

When combined with other log sources, network statistics can:

  • Prove or disprove data theft allegations
  • Establish communication timelines
  • Detect malware command and control
  • Reveal hidden app activities
  • Support location intelligence
  • Identify behavioral patterns

The systematic analysis of network statistics, especially when correlated with battery stats, usage stats, and package information, provides a comprehensive view of device network activity that is essential for modern mobile forensics investigations.