Dynamic Pricing With AI: A Growth Hacker's Guide
Amazon changes prices 2.5 million times a day. That's not a team of analysts with spreadsheets. It's machine learning models that ingest real-time data on demand, competition, inventory, and customer behaviour, then output optimal price points per product per moment. McKinsey research shows dynamic pricing increases revenue by an average of 5% without significant capital investment. Other studies report margin improvements of up to 10% and inventory turnover gains of 20%. The dynamic pricing software market is growing at over 31% annually.

But here's the thing most articles on dynamic pricing miss: it's not about changing prices wildly. It's about understanding price elasticity — how much demand changes when you nudge the price — and using that understanding to find the sweet spot between revenue, margin, and customer trust.

This post is a practical guide. I'll walk through the ML concepts, show you the Python scripts that calculate price elasticity from your own data, and give you the Solidus/Rails implementation that connects it all into a working pricing engine with proper guardrails. No client-specific examples this time — just clean, reusable patterns you can adapt to any ecommerce store running on Rails.
Price Elasticity: The Concept That Makes Everything Else Work
Before we touch any code, you need to understand one concept: price elasticity of demand. It's the most important number in dynamic pricing.
Price elasticity measures how much demand changes when you change the price. The formula is simple:
Elasticity = (% change in quantity sold) / (% change in price)
If a product has an elasticity of -1.5, a 10% price increase leads to a 15% drop in units sold. If it's -0.3, a 10% price increase only causes a 3% drop — demand barely notices.
This matters because it tells you where to be aggressive and where to be careful. Products with low elasticity (close to zero) are candidates for price increases — customers will pay more without buying less. Products with high elasticity (far from zero) need competitive pricing — every penny matters.
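The formula is easy to sanity-check with two observations of price and quantity. A minimal sketch (the function name is mine, not part of the scripts below):

```python
def point_elasticity(p0: float, q0: float, p1: float, q1: float) -> float:
    """Elasticity = (% change in quantity sold) / (% change in price)."""
    pct_dq = (q1 - q0) / q0
    pct_dp = (p1 - p0) / p0
    return pct_dq / pct_dp

# A 10% price increase (100 -> 110) with a 15% drop in units (200 -> 170)
print(point_elasticity(100, 200, 110, 170))  # -1.5
```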
Here's the practical bit: you can calculate elasticity from your own order data. You don't need a PhD. You need a product that's had at least a few price changes and enough orders to see the pattern.
Python: Calculating Price Elasticity From Your Data
This script takes order data exported from Solidus (or any ecommerce platform) and estimates price elasticity per product using log-log regression:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import json


def calculate_elasticity(orders_csv: str, min_price_changes: int = 3,
                         min_weeks: int = 30) -> dict:
    """
    Calculate price elasticity per product from order history.
    Export from Solidus: product_id, price, quantity, order_date
    """
    df = pd.read_csv(orders_csv, parse_dates=['order_date'])

    # Aggregate to weekly level to smooth noise. Use the ISO year
    # alongside the ISO week so early-January dates aren't grouped
    # with the wrong year.
    iso = df['order_date'].dt.isocalendar()
    df['year'] = iso.year
    df['week'] = iso.week

    weekly = df.groupby(['product_id', 'year', 'week']).agg(
        avg_price=('price', 'mean'),
        total_quantity=('quantity', 'sum')
    ).reset_index()

    results = {}
    for product_id, group in weekly.groupby('product_id'):
        # Need enough weekly observations and enough price variation
        if len(group) < min_weeks:
            continue
        if group['avg_price'].nunique() < min_price_changes:
            continue

        # Log-log regression: ln(Q) = a + b * ln(P)
        # Coefficient b IS the elasticity
        log_price = np.log(group['avg_price'].values).reshape(-1, 1)
        log_quantity = np.log(group['total_quantity'].values + 1)

        model = LinearRegression()
        model.fit(log_price, log_quantity)

        elasticity = model.coef_[0]
        r_squared = model.score(log_price, log_quantity)

        results[str(product_id)] = {
            'elasticity': round(elasticity, 3),
            'r_squared': round(r_squared, 3),
            'data_points': len(group),
            'price_range': {
                'min': round(group['avg_price'].min(), 2),
                'max': round(group['avg_price'].max(), 2)
            },
            'interpretation': classify_elasticity(elasticity)
        }
    return results


def classify_elasticity(e: float) -> str:
    abs_e = abs(e)
    if abs_e < 0.5:
        return 'highly_inelastic'   # Price increase opportunity
    elif abs_e < 1.0:
        return 'inelastic'          # Moderate price flexibility
    elif abs_e < 1.5:
        return 'elastic'            # Price sensitive
    else:
        return 'highly_elastic'     # Very price sensitive


if __name__ == '__main__':
    results = calculate_elasticity('solidus_orders_export.csv')
    print(json.dumps(results, indent=2))

    # Quick summary
    for pid, data in results.items():
        print(f"Product {pid}: elasticity={data['elasticity']}, "
              f"type={data['interpretation']}, "
              f"R²={data['r_squared']}")
The output tells you exactly which products can tolerate price increases and which can't. A product with elasticity of -0.3 and good R² is a candidate for a 5-10% price increase with minimal volume loss. A product at -2.5 needs to stay competitively priced or you'll haemorrhage sales.
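To turn those numbers into a decision, you can project the revenue impact of a candidate price change under a constant-elasticity demand model, where quantity scales as (P1/P0) raised to the elasticity. A minimal sketch (the function name is illustrative):

```python
def project_revenue_change(elasticity: float, price_change_pct: float) -> float:
    """Projected % revenue change for a given % price change,
    assuming constant elasticity: Q1/Q0 = (P1/P0) ** elasticity."""
    price_ratio = 1 + price_change_pct / 100.0
    quantity_ratio = price_ratio ** elasticity
    revenue_ratio = price_ratio * quantity_ratio
    return (revenue_ratio - 1) * 100

# Inelastic product (-0.3): a 10% increase grows revenue by ~6.9%
print(round(project_revenue_change(-0.3, 10), 1))
# Highly elastic product (-2.5): the same increase cuts revenue by ~13.3%
print(round(project_revenue_change(-2.5, 10), 1))
```

Note that at elasticity exactly -1, revenue is flat whatever you do to the price: the quantity change cancels the price change.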
Python: Demand-Aware Optimal Price Prediction
Once you know elasticity, you can build a model that predicts the optimal price given current conditions. This XGBoost script incorporates multiple signals:
import pandas as pd
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_absolute_error


def train_pricing_model(features_csv: str):
    """
    Train a pricing model that predicts optimal price.
    Features CSV should include per product per day:
    - product_id, date, price_charged, units_sold, revenue
    - competitor_min_price, competitor_avg_price
    - inventory_level, days_of_stock_remaining
    - day_of_week, month, is_holiday, is_promotional_period
    - marketing_spend_daily
    - category_demand_index (relative demand vs 30-day avg)
    """
    df = pd.read_csv(features_csv, parse_dates=['date'])

    # Target: revenue per unit (proxy for optimal price point)
    # We want to maximise total revenue, not just price
    df['revenue_per_unit'] = df['revenue'] / df['units_sold'].clip(lower=1)

    feature_cols = [
        'competitor_min_price', 'competitor_avg_price',
        'inventory_level', 'days_of_stock_remaining',
        'day_of_week', 'month', 'is_holiday',
        'is_promotional_period', 'marketing_spend_daily',
        'category_demand_index'
    ]
    X = df[feature_cols]
    y = df['revenue_per_unit']

    # Time-series aware cross-validation: never train on the future
    tscv = TimeSeriesSplit(n_splits=5)
    scores = []
    for train_idx, test_idx in tscv.split(X):
        X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
        y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]

        model = XGBRegressor(
            n_estimators=200,
            max_depth=6,
            learning_rate=0.05,
            subsample=0.8,
            colsample_bytree=0.8
        )
        model.fit(X_train, y_train,
                  eval_set=[(X_test, y_test)],
                  verbose=False)

        predictions = model.predict(X_test)
        mae = mean_absolute_error(y_test, predictions)
        scores.append(mae)

    print(f"Cross-validated MAE: {np.mean(scores):.2f}")

    # Train final model on all data, with the same hyperparameters
    # used during cross-validation
    final_model = XGBRegressor(
        n_estimators=200, max_depth=6,
        learning_rate=0.05, subsample=0.8,
        colsample_bytree=0.8
    )
    final_model.fit(X, y)

    # Feature importance tells you what drives pricing
    importance = dict(zip(feature_cols,
                          final_model.feature_importances_))
    print("\nFeature importance:")
    for feat, imp in sorted(importance.items(),
                            key=lambda x: x[1], reverse=True):
        print(f"  {feat}: {imp:.3f}")

    return final_model


def predict_optimal_price(model, current_conditions: dict) -> float:
    """Given current market conditions, predict optimal price."""
    features = pd.DataFrame([current_conditions])
    return float(model.predict(features)[0])
The feature importance output is genuinely useful. For one product category, you might find that competitor_min_price and days_of_stock_remaining are the dominant signals — meaning you should price relative to competition and accelerate clearance as stock ages. For another, category_demand_index dominates — meaning demand-based surge pricing works well.
The Solidus/Rails Pricing Engine
Now let's wire these Python models into a Solidus store. The architecture: a Rails service layer that manages pricing rules, calls Python models for predictions, and enforces guardrails before any price change reaches the shop floor.
module DynamicPricing
  class Engine
    # Guardrails - these prevent the algorithm from going rogue
    DEFAULT_GUARDRAILS = {
      max_increase_pct: 15.0,    # Never raise more than 15% at once
      max_decrease_pct: 25.0,    # Never drop more than 25% at once
      max_daily_changes: 3,      # Max 3 price changes per product per day
      min_margin_pct: 10.0,      # Never go below 10% margin
      review_threshold_pct: 8.0, # Flag for human review above 8% change
      cooldown_hours: 4          # Minimum hours between changes
    }.freeze

    def initialize(guardrails: {})
      @guardrails = DEFAULT_GUARDRAILS.merge(guardrails)
      @python_bridge = PythonBridge.new
    end

    def calculate_price(variant:)
      current = variant.price.to_f
      cost = variant.cost_price&.to_f || estimate_cost(variant)

      # Gather all signals
      signals = gather_signals(variant)

      # Get ML prediction
      ml_price = @python_bridge.predict_optimal_price(
        variant_id: variant.id,
        signals: signals
      )

      # Apply guardrails
      constrained = apply_guardrails(
        variant: variant,
        current_price: current,
        proposed_price: ml_price,
        cost: cost
      )

      PriceRecommendation.new(
        variant: variant,
        current_price: current,
        ml_suggested_price: ml_price,
        final_price: constrained[:price],
        change_pct: constrained[:change_pct],
        needs_review: constrained[:needs_review],
        signals: signals,
        guardrail_applied: constrained[:guardrail_applied],
        reasoning: constrained[:reasoning]
      )
    end

    private

    def gather_signals(variant)
      {
        demand: DemandSignals.for_variant(variant),
        competition: CompetitorSignals.for_variant(variant),
        inventory: InventorySignals.for_variant(variant),
        elasticity: ElasticityCache.for_variant(variant),
        temporal: TemporalSignals.current
      }
    end

    def apply_guardrails(variant:, current_price:, proposed_price:, cost:)
      guardrail_applied = nil
      reasoning = []

      # 1. Margin floor
      min_price = cost * (1 + @guardrails[:min_margin_pct] / 100.0)
      if proposed_price < min_price
        proposed_price = min_price
        guardrail_applied = :margin_floor
        reasoning << "Price raised to maintain #{@guardrails[:min_margin_pct]}% minimum margin"
      end

      # 2. Maximum increase cap
      max_up = current_price * (1 + @guardrails[:max_increase_pct] / 100.0)
      if proposed_price > max_up
        proposed_price = max_up
        guardrail_applied = :max_increase
        reasoning << "Capped at #{@guardrails[:max_increase_pct]}% maximum increase"
      end

      # 3. Maximum decrease cap
      max_down = current_price * (1 - @guardrails[:max_decrease_pct] / 100.0)
      if proposed_price < max_down
        proposed_price = max_down
        guardrail_applied = :max_decrease
        reasoning << "Floored at #{@guardrails[:max_decrease_pct]}% maximum decrease"
      end

      # 4. Cooldown period
      last_change = PriceChangeLog.latest_for(variant)
      if last_change && last_change.created_at > @guardrails[:cooldown_hours].hours.ago
        proposed_price = current_price
        guardrail_applied = :cooldown
        reasoning << "Within #{@guardrails[:cooldown_hours]}h cooldown period"
      end

      # 5. Daily change limit
      changes_today = PriceChangeLog.today_count_for(variant)
      if changes_today >= @guardrails[:max_daily_changes]
        proposed_price = current_price
        guardrail_applied = :daily_limit
        reasoning << "Reached #{@guardrails[:max_daily_changes]} daily change limit"
      end

      final_change_pct = ((proposed_price - current_price) / current_price) * 100
      needs_review = final_change_pct.abs > @guardrails[:review_threshold_pct]

      {
        price: proposed_price.round(2),
        change_pct: final_change_pct.round(2),
        needs_review: needs_review,
        guardrail_applied: guardrail_applied,
        reasoning: reasoning
      }
    end
  end
end
The Signal Gatherers
Each signal source is a separate module, making it easy to add new data sources:
module DynamicPricing
  class DemandSignals
    class << self
      def for_variant(variant)
        {
          # Sales velocity: units per day over last 7/30/90 days
          velocity_7d: sales_velocity(variant, 7),
          velocity_30d: sales_velocity(variant, 30),
          velocity_trend: velocity_trend(variant),

          # Search and browse signals
          product_views_7d: PageView.for_product(variant.product, 7.days),
          search_impressions_7d: SearchLog.impressions_for(variant.product, 7.days),
          cart_adds_7d: CartEvent.adds_for(variant, 7.days),

          # Inventory pressure
          stock_level: variant.total_on_hand,
          days_of_stock: days_of_stock_remaining(variant),
          overstock: overstock?(variant),

          # Demand index: current vs historical
          demand_index: demand_index(variant)
        }
      end

      private

      # Defined inside `class << self` so `private` actually applies —
      # a bare `private` keyword has no effect on `def self.` methods.
      def demand_index(variant)
        current_velocity = sales_velocity(variant, 7)
        historical_velocity = sales_velocity(variant, 90)
        return 1.0 if historical_velocity.zero?

        (current_velocity / historical_velocity).round(2)
      end

      def days_of_stock_remaining(variant)
        velocity = sales_velocity(variant, 30)
        return 999 if velocity.zero?

        (variant.total_on_hand / velocity).round(1)
      end

      # sales_velocity, velocity_trend and overstock? omitted for brevity
    end
  end

  class CompetitorSignals
    def self.for_variant(variant)
      prices = CompetitorPrice
               .where(product_id: variant.product_id)
               .where('scraped_at > ?', 24.hours.ago)
      return {} if prices.empty?

      {
        min_price: prices.minimum(:price).to_f,
        max_price: prices.maximum(:price).to_f,
        avg_price: prices.average(:price).to_f,
        competitor_count: prices.select(:competitor_name).distinct.count,
        our_position: price_position(variant.price, prices),
        cheapest_in_stock: cheapest_in_stock_competitor(prices)
      }
    end

    # price_position and cheapest_in_stock_competitor omitted for brevity
  end
end
Python: Simple Competitor Price Scraper
Here's a basic competitor price monitoring script. In production you'd use a proper scraping service, but this shows the pattern:
import requests
from bs4 import BeautifulSoup
import time
from datetime import datetime, timezone


def scrape_competitor_prices(products: list,
                             competitors: list) -> list:
    """
    Basic competitor price scraper.
    In production, use a service like Prisync, Competera, or
    build on ScrapingBee/Bright Data for anti-bot handling.

    products: [{'sku': 'ABC123', 'name': 'Blue Widget', 'ean': '123456'}]
    competitors: [{'name': 'ShopA', 'search_url': '...', 'selectors': {...}}]
    """
    results = []
    for product in products:
        for competitor in competitors:
            try:
                price = fetch_competitor_price(product, competitor)
                if price:
                    results.append({
                        'product_sku': product['sku'],
                        'competitor': competitor['name'],
                        'price': price,
                        'currency': 'GBP',
                        'scraped_at': datetime.now(timezone.utc).isoformat(),
                        'in_stock': True
                    })
            except Exception as e:
                print(f"Error scraping {competitor['name']} "
                      f"for {product['sku']}: {e}")
            # Be polite: rate-limit requests
            time.sleep(2)
    return results


def fetch_competitor_price(product, competitor):
    """Fetch a single competitor price via search."""
    url = competitor['search_url'].format(query=product['name'])
    headers = {'User-Agent': 'PriceMonitorBot/1.0'}

    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, 'html.parser')
    price_el = soup.select_one(competitor['selectors']['price'])
    if price_el:
        price_text = price_el.get_text(strip=True)
        # Strip currency symbol and thousands separators, parse float
        return float(
            price_text.replace('£', '')
                      .replace(',', '')
                      .strip()
        )
    return None
Applying Price Changes in Solidus
Once the engine produces a recommendation, you need to actually change the price. This should always go through a controlled pipeline:
module DynamicPricing
  class PriceApplicator
    # Returns :applied, :reviewed or :skipped so the scheduler
    # can tally outcomes.
    def apply(recommendation)
      return skip(recommendation) if recommendation.change_pct.abs < 0.5
      return queue_for_review(recommendation) if recommendation.needs_review

      execute_price_change(recommendation)
    end

    private

    def execute_price_change(recommendation)
      variant = recommendation.variant
      old_price = variant.price

      ActiveRecord::Base.transaction do
        variant.update!(price: recommendation.final_price)

        PriceChangeLog.create!(
          variant: variant,
          old_price: old_price,
          new_price: recommendation.final_price,
          change_pct: recommendation.change_pct,
          model_suggested: recommendation.ml_suggested_price,
          guardrail_applied: recommendation.guardrail_applied,
          signals_snapshot: recommendation.signals.to_json,
          reasoning: recommendation.reasoning,
          applied_by: :algorithm
        )
      end
      :applied
    end

    def queue_for_review(recommendation)
      PriceReviewQueue.create!(
        variant: recommendation.variant,
        recommendation: recommendation.to_json,
        reason: "Change of #{recommendation.change_pct}% exceeds review threshold",
        status: :pending
      )
      :reviewed
    end

    def skip(_recommendation)
      :skipped # No change needed
    end
  end
end
The Scheduler: When to Run Pricing
Dynamic pricing doesn't mean frantic pricing. For most ecommerce stores, running the engine once or twice daily is plenty:
module DynamicPricing
  class Scheduler
    # Run via cron: rake dynamic_pricing:run
    def run(scope: :all)
      variants = case scope
                 when :all then Spree::Variant.active.in_stock
                 when :high_velocity then high_velocity_variants
                 when :overstock then overstocked_variants
                 else Spree::Variant.where(id: scope)
                 end

      engine = Engine.new
      applicator = PriceApplicator.new
      results = { applied: 0, reviewed: 0, skipped: 0, errors: 0 }

      variants.find_each do |variant|
        recommendation = engine.calculate_price(variant: variant)
        outcome = applicator.apply(recommendation)
        results[outcome] += 1
      rescue StandardError => e
        Rails.logger.error("Pricing error for variant #{variant.id}: #{e.message}")
        results[:errors] += 1
      end

      notify_summary(results)
      results
    end

    # high_velocity_variants, overstocked_variants and notify_summary
    # omitted for brevity
  end
end
Three Pricing Strategies (Pick the One That Fits)
Demand-based pricing adjusts prices based on how fast products are selling relative to their historical rate. If a product is selling twice as fast as usual (demand_index > 2.0), nudge the price up. If it's selling at half the rate (demand_index < 0.5), consider a reduction or promotion. Best for: seasonal products, trending items, flash sales.
Competitive pricing anchors to what competitors charge and positions you relative to the market. If your price is 15% above the cheapest competitor and your product isn't differentiated enough to justify it, the algorithm brings it closer. Best for: commodity products, price-comparison-heavy categories, marketplaces.
Margin-optimising pricing focuses on maximising margin within competitive and demand constraints. It uses elasticity data to find the highest price the market will tolerate without significant volume loss. Best for: differentiated products, luxury goods, products with strong brand loyalty.
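As an illustration of the first strategy, a demand-based adjustment can be as simple as mapping the demand index to a bounded price multiplier. The breakpoints below are illustrative defaults, not prescriptions:

```python
def demand_based_multiplier(demand_index: float,
                            max_up: float = 0.10,
                            max_down: float = 0.10) -> float:
    """Map demand index (current vs historical velocity) to a
    price multiplier, clamped to +/-10% by default."""
    if demand_index >= 2.0:      # selling twice as fast as usual
        nudge = max_up
    elif demand_index >= 1.5:
        nudge = max_up / 2
    elif demand_index <= 0.5:    # selling at half the usual rate
        nudge = -max_down
    elif demand_index <= 0.8:
        nudge = -max_down / 2
    else:
        nudge = 0.0              # normal demand: hold price
    return 1.0 + nudge

print(demand_based_multiplier(2.3))  # 1.1
print(demand_based_multiplier(1.0))  # 1.0
print(demand_based_multiplier(0.4))  # 0.9
```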
In practice, most stores use a blend. The engine picks the strategy per product based on its characteristics:
module DynamicPricing
  class StrategySelector
    def select(variant:, signals:)
      elasticity = signals.dig(:elasticity, :value) || -1.0
      competitor_count = signals.dig(:competition, :competitor_count) || 0
      demand_index = signals.dig(:demand, :demand_index) || 1.0

      if demand_index > 1.8 || demand_index < 0.4
        :demand_based      # Strong demand signal, respond to it
      elsif competitor_count > 3 && elasticity.abs > 1.5
        :competitive       # Many competitors + price sensitive
      else
        :margin_optimised  # Default: maximise margin
      end
    end
  end
end
The Guardrails That Actually Matter
The ML model will sometimes suggest things that are technically optimal but commercially insane. A story to illustrate: imagine the model detects a sudden demand spike for a product (it went viral on social media). The pure optimisation says: raise the price 40%. The model is mathematically right — at that demand level, you'd maximise revenue even at the higher price.
But your customers would be furious. They'd screenshot the price change, share it on social media, and your brand would take a hit that no short-term revenue gain compensates for.
That's why guardrails aren't optional. They're the most important part of the system:
- Maximum change limits (per change and per day) prevent whiplash.
- Cooldown periods stop the algorithm from reacting to noise.
- Margin floors ensure you never sell below cost regardless of competitive pressure.
- Human review thresholds catch the edge cases where the algorithm is right but the context requires judgement.
- Full audit logging means you can explain every price change — critical for regulatory compliance and customer trust.
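One detail worth getting right is the order the guardrails fire in: if a margin floor runs before a "maximum increase" cap, the cap can push the price back below the floor. A minimal sketch of one safe ordering, where the change caps apply first and the margin floor wins last (thresholds are illustrative):

```python
def clamp_price(current: float, proposed: float, cost: float,
                max_increase_pct: float = 15.0,
                max_decrease_pct: float = 25.0,
                min_margin_pct: float = 10.0) -> float:
    """Apply per-change caps first, then the margin floor, so the
    floor can never be undone by a later cap."""
    max_up = current * (1 + max_increase_pct / 100)
    max_down = current * (1 - max_decrease_pct / 100)
    price = min(max(proposed, max_down), max_up)

    # Margin floor wins over everything else
    min_price = cost * (1 + min_margin_pct / 100)
    return round(max(price, min_price), 2)

# A wild 40% suggested increase gets capped at +15%
print(clamp_price(current=100.0, proposed=140.0, cost=60.0))  # 115.0
# A deep cut is floored by the margin rule, not just the -25% cap
print(clamp_price(current=100.0, proposed=50.0, cost=80.0))   # 88.0
```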
Getting Started
If you're running a Solidus store (or any Rails ecommerce platform) and want to experiment with dynamic pricing:
- Export your order history. You need: product, price at time of sale, quantity, date. That's the minimum for elasticity estimation.
- Run the elasticity script. Identify which products are elastic (price sensitive) and which are inelastic (price tolerant). This alone is worth doing even if you never automate anything — it tells you where manual price increases won't hurt.
- Start with rules, not ML. Before deploying a model, implement simple rules: "If days_of_stock < 14 and velocity is declining, reduce price 5%" or "If demand_index > 1.5, hold price firm." Rules are transparent and controllable.
- Add ML when rules feel limiting. Once you've seen the pattern of what works, train the XGBoost model on your data. The model will find non-obvious combinations of signals that rules can't capture.
- Always log everything. Every price change, every signal snapshot, every guardrail intervention. This is your learning dataset for the next model iteration, and it's your audit trail for when someone asks why a price changed.
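The "rules, not ML" step can be sketched as a handful of ordered, auditable rules returning a price multiplier. The thresholds mirror the examples above; the function and field names are illustrative:

```python
def rule_based_adjustment(days_of_stock: float,
                          velocity_trend: float,
                          demand_index: float) -> float:
    """Return a price multiplier from simple, transparent rules.
    velocity_trend < 0 means sales are slowing."""
    # Clearance pressure: low stock cover and slowing sales
    if days_of_stock < 14 and velocity_trend < 0:
        return 0.95   # reduce price 5%
    # Strong demand: hold price firm
    if demand_index > 1.5:
        return 1.0
    # Overstock: gentle markdown to move inventory
    if days_of_stock > 90:
        return 0.97
    return 1.0

print(rule_based_adjustment(10, -0.2, 1.0))   # 0.95
print(rule_based_adjustment(120, 0.1, 1.0))   # 0.97
```

Because each rule is a single readable condition, anyone on the team can answer "why did this price change?" without consulting a model — exactly the transparency you want before automating further.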