You've seen them in server configs, CI/CD pipelines, and cloud dashboards: cryptic strings like `30 4 * * 6` or `0 */6 * * *`. These are cron expressions — the scheduling language that powers virtually every automated task on Linux systems and cloud platforms. If you can read cron, you can understand when any scheduled job runs, debug timing issues, and write schedules with confidence.
This guide teaches cron expressions through 5 real-world scenarios. Instead of memorizing syntax, you'll learn by doing — each scenario presents a real operations problem, walks through the cron solution, and explains every field.
A standard cron expression has 5 fields, separated by spaces:
| Field | Values | Description |
|---|---|---|
| Minute | 0-59 | Minute of the hour |
| Hour | 0-23 | Hour of the day (24-hour format) |
| Day of Month | 1-31 | Day of the month |
| Month | 1-12 | Month of the year |
| Day of Week | 0-6 (0 = Sunday) | Day of the week |
Special characters: `*` (any value), `,` (list), `-` (range), `/` (step). Now let's apply this to real scenarios.
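One illustrative crontab line per special character (the script path is a placeholder):

```shell
# Placeholder job showing each special character in the minute/hour fields:
5 * * * * /usr/local/bin/job.sh         # * (any value): at :05 every hour
0 9,13,17 * * * /usr/local/bin/job.sh   # , (list): 9 AM, 1 PM, and 5 PM daily
0 9-17 * * 1 /usr/local/bin/job.sh      # - (range): hourly from 9 AM to 5 PM on Mondays
*/10 * * * * /usr/local/bin/job.sh      # / (step): every 10 minutes
```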
The problem: You manage a fleet of Linux servers. Every Sunday at 3:00 AM, you need to run system updates, clean package caches, and check disk space. During business hours, these tasks would slow down production. You need a cron schedule that runs reliably every weekend.
Field breakdown:
- `0` — Minute: exactly at :00
- `3` — Hour: 3 AM (off-peak, before Monday starts)
- `*` — Day of month: every day of the month
- `*` — Month: every month
- `0` — Day of week: Sunday (0 = Sunday in standard cron)

The crontab entry:
# Weekly system maintenance
0 3 * * 0 /usr/local/bin/system-maintenance.sh >> /var/log/maintenance.log 2>&1
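What the `>> … 2>&1` redirection does can be checked in a plain shell session; a minimal sketch (the log path here is only for demonstration):

```shell
# Append both stdout and stderr to the same log file,
# just as the crontab entry above does.
log=/tmp/maintenance-demo.log
: > "$log"                                  # start from an empty file
{ echo "updates ok"; echo "disk warning" >&2; } >> "$log" 2>&1
cat "$log"
```

Note the order: `2>&1` must come after `>>`, or stderr would bypass the log file.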
Pro tips for this scenario:
- Redirect output with `>>` and `2>&1` so you can debug failures.
- Set `MAILTO=admin@example.com` in your crontab to receive alerts on failures.
- Use `lockfile` or `flock` to prevent overlapping runs if the script takes longer than a week.
- Run `/usr/local/bin/system-maintenance.sh` by hand first to verify it works before trusting it to cron.

The problem: Your PostgreSQL database needs daily backups. Backups should run at 2:30 AM every day (after the nightly ETL pipeline finishes at 2:00 AM). You also want weekly full backups on Saturdays and monthly archive backups on the 1st of each month.
Daily incremental backup: `30 2 * * *` (2:30 AM every day)
Weekly full backup: `0 3 * * 6` (3:00 AM every Saturday)
Monthly archive: `0 4 1 * *` (4:00 AM on the 1st of each month)
The crontab entries:
# Daily incremental backup
30 2 * * * pg_dump -Fc mydb | gzip > /backups/daily/mydb-$(date +\%Y\%m\%d).sql.gz
# Weekly full backup
0 3 * * 6 pg_dump -Fc mydb | gzip > /backups/weekly/mydb-week-$(date +\%Y\%W).sql.gz
# Monthly archive
0 4 1 * * pg_dump -Fc mydb | gzip > /backups/monthly/mydb-$(date +\%Y\%m).sql.gz
Key insight: The $(date +\%Y\%m\%d) creates timestamped filenames. Note the \% escaping — cron requires percent signs to be escaped in commands.
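Outside of crontab, the percent signs need no escaping; a quick sketch of the same filename generation in an interactive shell (paths match the entries above):

```shell
# Build the timestamped filename the daily backup entry uses.
# In a shell script, % is NOT escaped; the \% form is only for crontab lines.
stamp=$(date +%Y%m%d)        # e.g. 20240131
file="/backups/daily/mydb-${stamp}.sql.gz"
echo "$file"
```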
Rotation strategy: Add a cleanup job that removes backups older than 30 days:
0 5 * * * find /backups/daily -name "*.sql.gz" -mtime +30 -delete
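Before trusting `-delete`, it is worth previewing what would go; a sketch using a throwaway directory and `-print` in place of `-delete` (GNU `find` and `touch -d` assumed):

```shell
# Create one "old" and one "new" file, then preview what the
# cleanup job would remove by swapping -delete for -print.
mkdir -p /tmp/backup-demo
touch -d "40 days ago" /tmp/backup-demo/old.sql.gz
touch /tmp/backup-demo/new.sql.gz
find /tmp/backup-demo -name "*.sql.gz" -mtime +30 -print
```

Only the 40-day-old file is listed; once the output looks right, switch `-print` back to `-delete`.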
The problem: You have a microservices architecture where Service A is the source of truth for user profiles. Service B needs to sync user data every 15 minutes during business hours (9 AM - 6 PM, Monday through Friday). Running the sync 24/7 would waste resources; running it only during business hours keeps the data fresh enough for real-time features.
The crontab entry: `*/15 9-18 * * 1-5 /usr/local/bin/sync-users.sh`

Field breakdown:
- `*/15` — Every 15 minutes (0, 15, 30, 45)
- `9-18` — Hours 9 through 18 inclusive
- `*` — Any day of month
- `*` — Any month
- `1-5` — Monday (1) through Friday (5)

Next execution times for this schedule:
Mon 09:00, Mon 09:15, Mon 09:30, ..., Mon 18:00, Mon 18:15, Mon 18:30, Mon 18:45
Tue 09:00, Tue 09:15, ...
(No executions on weekends or outside 9 AM - 6:45 PM)
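Both safeguards discussed in the tips below can be combined into a single hardened entry; a sketch (the lock file path and the 5-minute timeout are assumptions):

```shell
# Hypothetical hardened version of the sync schedule:
# flock -n skips this run if the previous sync still holds the lock;
# timeout 300 kills a sync that hangs for more than 5 minutes.
*/15 9-18 * * 1-5 flock -n /tmp/sync-users.lock timeout 300 /usr/local/bin/sync-users.sh
```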
Pro tips:
- Wrap the sync in a timeout so a hung run is killed instead of piling up: `timeout 300 /usr/local/bin/sync-users.sh`
- Use `flock -n` to skip the sync if the previous one is still running.

The problem: Your sales team wants a weekly performance summary email every Monday at 8:00 AM, giving them time to review before the Monday standup at 9:30 AM. They also want a monthly executive report on the last day of every month at 4:00 PM.
Weekly sales report: `0 8 * * 1` (8:00 AM every Monday)
Monthly executive report: `0 16 28-31 * *` (4:00 PM on days 28-31)
The tricky part: Cron doesn't have a "last day of month" operator. Using 28-31 means the job runs on days 28, 29, 30, and 31. To run only on the actual last day, wrap the command in a shell check:
# Monthly executive report (runs on actual last day of month)
0 16 28-31 * * [ $(date -d "+1 day" +\%d) -eq 01 ] && /usr/local/bin/executive-report.sh
This checks if tomorrow is the 1st — if so, today must be the last day of the month. Elegant and reliable.
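The same check can be tried in a shell with a pinned date (GNU `date` assumed):

```shell
# Verify the last-day logic for a known date: the day after
# 2024-01-31 is the 1st, so the guard passes.
if [ "$(date -d "2024-01-31 +1 day" +%d)" -eq 01 ]; then
    echo "2024-01-31 is the last day of its month"
fi
```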
The problem: Your application generates 500MB of logs per day across multiple services. Without cleanup, disk space fills up in weeks. You need to rotate and compress logs daily, and delete old logs after 90 days. The cleanup should run at midnight when traffic is lowest.
The crontab entries:
# Compress logs older than one day
0 0 * * * find /var/log/app -name "*.log" -mtime +1 -exec gzip {} \;
# Delete logs older than 90 days
0 0 * * * find /var/log/app -name "*.log.gz" -mtime +90 -delete
# Also clean up temp files older than 7 days
0 0 * * * find /tmp/app-* -mtime +7 -delete
Better approach — use logrotate:
While cron works, most Linux systems have logrotate built in, which handles compression, rotation, and retention automatically. Create a config file at /etc/logrotate.d/myapp:
/var/log/app/*.log {
    daily
    rotate 90
    compress
    delaycompress
    missingok
    notifempty
    create 0644 www-data www-data
}
Then a single cron entry triggers logrotate daily:
0 0 * * * /usr/sbin/logrotate /etc/logrotate.conf
Common cron gotchas:
- Cron runs with a minimal environment and PATH. Use absolute paths (e.g., `/usr/bin/python3` instead of `python3`), or source your profile at the top of your script.
- In crontab commands, `%` is interpreted as a newline. Escape it as `\%` in date commands.
- Cron uses the system timezone. To run a job in a different timezone, set `TZ=America/New_York` before the command.
- Use `flock` to prevent parallel execution: `* * * * * flock -n /tmp/myjob.lock mycommand`
- Set `MAILTO` to receive error notifications.
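Several of these settings can live at the top of the crontab and apply to every entry below them; a sketch (the address, timezone, and job name are placeholders):

```shell
# Crontab header: these variables apply to all entries that follow.
MAILTO=admin@example.com           # mail failures and output to this address
TZ=America/New_York                # timezone for the commands
PATH=/usr/local/bin:/usr/bin:/bin  # explicit PATH, since cron's default is minimal

0 8 * * 1-5 daily-report.sh        # placeholder job, found via PATH above
```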