9 changed files with 656 additions and 123 deletions
**PROJECT_STATE.md**

```diff
@@ -1,103 +1,51 @@
 # PROJECT_STATE.md
 
-## Project
-OTB Cloud
-
-## Current version
-v0.2.3
-
-## Build date
-2026-04-12
-
-## Host
-vault3
-
-## App path
-/opt/otb_cloud
-
-## Purpose
-Portal-authenticated secure backup and storage platform for customer files, including images, videos, documents, and other uploaded data.
-
-## Current implemented scaffold
-- Portal handoff from OTB Billing
-- Branded OTB portal shell styling
-- User-created devices
-- Device add/remove
-- Browser upload to device originals
-- Device file browser
-- Selection actions
-- Soft-delete to deleted folder
-- Recover from deleted folder
-- Zip workspace staging and zip export
-- Deleted files page with hard delete
-- Exports page
-
-## Retention and safety notes
-- Original files are stored as immutable originals
-- Deleted files are retained in the deleted area for up to 24 hours
-- Deleted files can be recovered during that hold window
-- Deleted files can also be hard-deleted immediately by the user
-- Recovered files return to originals with `-recovered` appended to filename
-- Zip staging copies are temporary working copies
-- Successful zip creation clears staged copies but does not affect original source files
-
-## Immediate next tasks
-1. Add basename-only rename flow
-2. Add searchable file listing
-3. Add bulk folder upload
-4. Add media processing jobs
-5. Add derived/original filtering
-6. Add better single-file actions in browser
-
-## Current update: v0.2.5
-- Added inline image serving route for browser previews
-- Added device browser view toggle: list or gallery
-- Added gallery cards with thumbnails, preview modal, rename, download, and checkbox actions
-- Existing bulk delete, download, and zip staging continue to work in both views
-
-## v0.2.5 — Gallery View + Image Preview
-
-### Added
-- Gallery view toggle for device file browser
-- Image thumbnail rendering (inline file route)
-- Click-to-preview full image modal
-- Gallery cards with:
-  - checkbox selection
-  - rename input
-  - download button
-  - preview button
-
-### Improved
-- File browsing now supports both:
-  - list (management)
-  - gallery (visual)
-- Bulk actions work in both views
-- Display filename system fully integrated across UI
-
-### Notes
-- Originals remain immutable
-- Thumbnails currently use original images (no derived images yet)
-- Foundation ready for future media processing pipeline
-
-## Current update: v0.2.8
-- Added folder-tree browser scoped by current path
-- Added clickable breadcrumbs for direct jumps to any parent folder
-- Added folders-first navigation while preserving list/gallery modes for files in the current folder
-- Browser now reflects preserved backup folder structure instead of flattening all files into one device-wide listing
-
-## v1.1.0-alpha1 — Video System Foundation
-- Added video_jobs table (processing queue)
-- Added tenant_usage_metrics table (dashboard metrics)
-- Added video service scaffolding (jobs, metrics, gpu select, profiles)
-- Extended device structure to include:
-  - video
-  - video-workshop
-  - archive
-  - lts
-- Prepared system for background worker architecture
-
-Next step:
-- Build video worker processing engine
+Project: OTB Cloud
+Version: v1.1.0-alpha3
+Updated: 2026-04-19
+Location: /opt/otb_cloud
+
+## Current State
+OTB Cloud now has a functioning workshop-driven video processing pipeline.
+
+### Confirmed Working
+- Portal and branded UI shell
+- Device browser
+- File selection flow into Video Workshop
+- Video Workshop page
+- Enqueue API
+- Jobs API
+- MariaDB-backed video_jobs integration
+- Tenant/device path resolution for queued jobs
+- Worker service startup and queue pickup
+- Worker-side absolute path resolution from tenant storage_root
+- Intel iGPU processing path
+- Successful completed output for device 27 (ripper)
+
+### Latest Proven Result
+A queued workshop job for:
+- source file: 05142013003.mp4
+- device: 27 (ripper)
+
+completed successfully with:
+- assigned_processor: intel
+- status: complete
+- progress_percent: 100
+- output_relative_path: devices/ripper/originals/20260413T210325474049Z__05142013003_processed.mp4
+
+## Known Remaining Improvements
+- Jobs panel is still raw JSON instead of a polished table/cards view
+- Failed jobs do not yet surface log_excerpt nicely in UI
+- No direct preview/download button for completed outputs in workshop
+- No health/storage/GPU dashboard panel yet
+- No explicit processor chooser in UI
+- Output placement may later deserve a dedicated derived/video output area
+- Existing patch helper scripts were moved out of repo to keep git clean
+
+## Recommended Next Step
+Proceed to alpha3-b:
+- replace raw JSON jobs output with styled job cards/table
+- add output links for completed jobs
+- add visible failure details from log_excerpt
+- add storage/GPU/worker health panel
```
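The `output_relative_path` in the result above reflects the device storage naming convention: stored files carry a UTC timestamp prefix joined to the original basename with `__`. A minimal sketch of that convention (the helper names here are illustrative, not functions from the repo):

```python
from datetime import datetime, timezone


def storage_name(original_name, now=None):
    """Prefix the original filename with a UTC timestamp, joined by '__'.

    Illustrative helper; format inferred from the output path above,
    e.g. 20260413T210325474049Z__05142013003_processed.mp4.
    """
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%dT%H%M%S%fZ")
    return f"{stamp}__{original_name}"


def display_name(stored_name):
    """Recover the original filename from a stored name."""
    return stored_name.split("__", 1)[-1]


recovered = display_name("20260413T210325474049Z__05142013003_processed.mp4")
```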
**`app/services/gpu_select.py`** (module path taken from the worker's import)

```diff
@@ -1,3 +1,30 @@
+import os
+
+LOCK_DIR = "/var/lib/otbcloud/locks"
+
+def _lock_path(name):
+    return os.path.join(LOCK_DIR, f"{name}.lock")
+
+def is_locked(name):
+    return os.path.exists(_lock_path(name))
+
+def acquire(name):
+    os.makedirs(LOCK_DIR, exist_ok=True)
+    path = _lock_path(name)
+    if os.path.exists(path):
+        return False
+    with open(path, "w") as f:
+        f.write(str(os.getpid()))
+    return True
+
+def release(name):
+    path = _lock_path(name)
+    if os.path.exists(path):
+        os.remove(path)
+
+
 def select_processor():
-    # v1.1.0 logic placeholder
-    return "intel"
+    if acquire("intel"):
+        return "intel"
+    if acquire("amd"):
+        return "amd"
+    return "cpu"
```
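The diff above serializes GPU access with one lockfile per processor. A self-contained sketch of the same lifecycle, pointed at a temp directory instead of `/var/lib/otbcloud/locks` (note the exists-then-open pair is not atomic — `open(path, "x")` would close that small race — and a crash before `release()` leaves a stale lock, which is presumably why the PID is recorded):

```python
import os
import tempfile

LOCK_DIR = tempfile.mkdtemp()  # stand-in for /var/lib/otbcloud/locks


def _lock_path(name):
    return os.path.join(LOCK_DIR, f"{name}.lock")


def acquire(name):
    # Same pattern as the module: refuse if the lockfile already exists,
    # otherwise record our PID so a stale lock can be traced to a process.
    path = _lock_path(name)
    if os.path.exists(path):
        return False
    with open(path, "w") as f:
        f.write(str(os.getpid()))
    return True


def release(name):
    path = _lock_path(name)
    if os.path.exists(path):
        os.remove(path)


# Lifecycle: first caller wins, second is refused until release.
first = acquire("intel")
second = acquire("intel")
release("intel")
third = acquire("intel")
```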
**Video jobs service** (the `jobs` module of the video service scaffolding)

```diff
@@ -1,9 +1,144 @@
-def create_job(db, tenant_id, device_id, source_path, filename, profile):
-    return {
-        "tenant_id": tenant_id,
-        "device_id": device_id,
-        "source_path": source_path,
-        "filename": filename,
-        "profile": profile,
-        "status": "queued"
-    }
+from app.db import get_db
+from pathlib import Path
+
+
+def get_tenant_row(db, tenant):
+    cur = db.cursor()
+    cur.execute(
+        "SELECT id, storage_root FROM tenants WHERE slug = %s LIMIT 1",
+        (tenant,)
+    )
+    row = cur.fetchone()
+    if not row:
+        return None
+    return row
+
+
+def get_device_row(db, device_id):
+    cur = db.cursor()
+    cur.execute(
+        "SELECT id, device_name, relative_path FROM devices WHERE id = %s LIMIT 1",
+        (device_id,)
+    )
+    row = cur.fetchone()
+    if not row:
+        return None
+    return row
+
+
+def resolve_source_relative_path(storage_root, device_relative_path, input_filename):
+    base = Path(storage_root) / device_relative_path
+    if not base.exists():
+        raise FileNotFoundError(f"Device base path not found: {base}")
+
+    candidates = []
+
+    for p in base.rglob("*"):
+        if not p.is_file():
+            continue
+        name = p.name
+        if name == input_filename or name.endswith("__" + input_filename):
+            candidates.append(p)
+
+    if not candidates:
+        raise FileNotFoundError(
+            f"Could not locate source file for {input_filename} under {base}"
+        )
+
+    candidates.sort(key=lambda p: p.stat().st_mtime, reverse=True)
+    chosen = candidates[0]
+
+    rel = chosen.relative_to(Path(storage_root))
+    return str(rel)
+
+
+def create_video_job(tenant, device_id, input_filename, profile="default"):
+    db = get_db()
+
+    tenant_row = get_tenant_row(db, tenant)
+    if not tenant_row:
+        raise Exception(f"Tenant not found: {tenant}")
+
+    device_row = get_device_row(db, device_id)
+    if not device_row:
+        raise Exception(f"Device not found: {device_id}")
+
+    tenant_id = tenant_row["id"]
+    storage_root = tenant_row["storage_root"]
+    device_relative_path = device_row["relative_path"]
+
+    source_relative_path = resolve_source_relative_path(
+        storage_root,
+        device_relative_path,
+        input_filename
+    )
+
+    cur = db.cursor()
+    cur.execute(
+        """
+        INSERT INTO video_jobs (
+            tenant_id,
+            device_id,
+            source_file_id,
+            source_relative_path,
+            source_original_filename,
+            requested_profile,
+            requested_gpu_preference,
+            status,
+            progress_percent
+        ) VALUES (%s, %s, NULL, %s, %s, %s, 'auto', 'queued', 0)
+        """,
+        (tenant_id, device_id, source_relative_path, input_filename, profile)
+    )
+    db.commit()
+    return cur.lastrowid
+
+
+def list_jobs_for_tenant(tenant):
+    db = get_db()
+
+    tenant_row = get_tenant_row(db, tenant)
+    if not tenant_row:
+        return []
+
+    tenant_id = tenant_row["id"]
+
+    cur = db.cursor()
+    cur.execute(
+        """
+        SELECT
+            id,
+            device_id,
+            source_original_filename,
+            requested_profile,
+            status,
+            progress_percent,
+            assigned_processor,
+            output_relative_path,
+            error_message,
+            created_at,
+            started_at,
+            completed_at
+        FROM video_jobs
+        WHERE tenant_id = %s
+        ORDER BY id DESC
+        LIMIT 100
+        """,
+        (tenant_id,)
+    )
+
+    rows = cur.fetchall()
+
+    out = []
+    for r in rows:
+        out.append({
+            "id": r["id"],
+            "device_id": r["device_id"],
+            "filename": r["source_original_filename"],
+            "profile": r["requested_profile"],
+            "status": r["status"],
+            "progress_percent": r["progress_percent"],
+            "assigned_processor": r["assigned_processor"],
+            "output_relative_path": r["output_relative_path"],
+            "error_message": r["error_message"],
+            "created_at": str(r["created_at"]) if r["created_at"] is not None else None,
+            "started_at": str(r["started_at"]) if r["started_at"] is not None else None,
+            "completed_at": str(r["completed_at"]) if r["completed_at"] is not None else None,
+        })
+
+    return out
```
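`resolve_source_relative_path` above matches either the exact input filename or any stored name ending in `__<filename>`, then prefers the newest candidate by mtime. A runnable sketch of just that matching rule against a throwaway tree:

```python
import os
import tempfile
import time
from pathlib import Path


def find_candidates(base, input_filename):
    """Mirror the service's matching rule: exact name, or a stored
    name ending in '__<name>'; newest mtime first."""
    out = []
    for p in Path(base).rglob("*"):
        if p.is_file() and (p.name == input_filename
                            or p.name.endswith("__" + input_filename)):
            out.append(p)
    out.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return out


base = Path(tempfile.mkdtemp())
(base / "originals").mkdir()

older = base / "originals" / "20260101T000000000000Z__clip.mp4"
older.write_bytes(b"old")
newer = base / "originals" / "20260413T000000000000Z__clip.mp4"
newer.write_bytes(b"new")
# Force distinct mtimes so the newest-wins sort is deterministic.
os.utime(older, (time.time() - 60, time.time() - 60))

chosen = find_candidates(base, "clip.mp4")[0]
```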
**Video worker**

```diff
@@ -1,6 +1,197 @@
 import time
+import subprocess
+from datetime import datetime
+from pathlib import Path
+
+from app import create_app
+from app.db import get_db
+from app.services.gpu_select import select_processor, release
+
+INTEL_DEV = "/dev/dri/renderD129"
+AMD_DEV = "/dev/dri/renderD128"
+
+
+def run_ffmpeg(cmd):
+    return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+
+
+def build_absolute_source_path(db, job):
+    with db.cursor() as cur:
+        cur.execute(
+            "SELECT storage_root FROM tenants WHERE id = %s",
+            (job["tenant_id"],)
+        )
+        tenant_row = cur.fetchone()
+
+    if not tenant_row:
+        raise RuntimeError(f"Tenant id {job['tenant_id']} not found")
+
+    storage_root = tenant_row["storage_root"]
+    return str(Path(storage_root) / job["source_relative_path"])
+
+
+def process_job(db, job):
+    job_id = job["id"]
+    src = build_absolute_source_path(db, job)
+    profile = job["requested_profile"]
+
+    processor = select_processor()
+    device = INTEL_DEV if processor == "intel" else AMD_DEV
+
+    output = str(Path(src).with_name(Path(src).stem + "_processed.mp4"))
+
+    if profile == "portrait_web":
+        vf = "format=nv12,hwupload,scale_vaapi=w=720:h=1280:force_original_aspect_ratio=decrease"
+    else:
+        vf = "format=nv12,hwupload,scale_vaapi=w=1280:h=720:force_original_aspect_ratio=decrease"
+
+    if processor in ("intel", "amd"):
+        cmd = [
+            "ffmpeg", "-hide_banner", "-y",
+            "-vaapi_device", device,
+            "-i", src,
+            "-vf", vf,
+            "-c:v", "h264_vaapi",
+            "-b:v", "3M",
+            "-maxrate", "3M",
+            "-bufsize", "6M",
+            "-c:a", "aac", "-b:a", "128k",
+            output
+        ]
+    else:
+        cmd = [
+            "ffmpeg", "-hide_banner", "-y",
+            "-i", src,
+            "-c:v", "libx264",
+            "-preset", "medium",
+            "-crf", "23",
+            "-c:a", "aac", "-b:a", "128k",
+            output
+        ]
+
+    start = datetime.utcnow()
+
+    try:
+        result = run_ffmpeg(cmd)
+        end = datetime.utcnow()
+
+        with db.cursor() as cur:
+            if result.returncode == 0:
+                rel_output = None
+                try:
+                    with db.cursor() as cur2:
+                        cur2.execute(
+                            "SELECT storage_root FROM tenants WHERE id = %s",
+                            (job["tenant_id"],)
+                        )
+                        tenant_row = cur2.fetchone()
+                        if tenant_row:
+                            rel_output = str(Path(output).relative_to(Path(tenant_row["storage_root"])))
+                except Exception:
+                    rel_output = output
+
+                cur.execute(
+                    """
+                    UPDATE video_jobs
+                    SET status='complete',
+                        assigned_processor=%s,
+                        output_relative_path=%s,
+                        progress_percent=100,
+                        started_at=COALESCE(started_at, %s),
+                        completed_at=%s,
+                        log_excerpt=%s,
+                        error_message=NULL
+                    WHERE id=%s
+                    """,
+                    (
+                        processor,
+                        rel_output or output,
+                        start,
+                        end,
+                        (result.stderr or "")[:1000],
+                        job_id,
+                    ),
+                )
+            else:
+                cur.execute(
+                    """
+                    UPDATE video_jobs
+                    SET status='failed',
+                        error_message=%s,
+                        log_excerpt=%s,
+                        completed_at=%s
+                    WHERE id=%s
+                    """,
+                    (
+                        "ffmpeg failed",
+                        (result.stderr or "")[:4000],
+                        end,
+                        job_id,
+                    ),
+                )
+        db.commit()
+
+    except Exception as e:
+        with db.cursor() as cur:
+            cur.execute(
+                """
+                UPDATE video_jobs
+                SET status='failed',
+                    error_message=%s,
+                    completed_at=UTC_TIMESTAMP()
+                WHERE id=%s
+                """,
+                (str(e)[:1000], job_id),
+            )
+        db.commit()
+    finally:
+        if processor in ("intel", "amd"):
+            release(processor)
+
+
 def run_worker():
-    print("video worker starting (stub)")
-    while True:
-        time.sleep(10)
+    app = create_app()
+
+    with app.app_context():
+        print("video worker started", flush=True)
+
+        while True:
+            try:
+                db = get_db()
+
+                try:
+                    db.rollback()
+                except Exception:
+                    pass
+
+                with db.cursor() as cur:
+                    cur.execute(
+                        """
+                        SELECT *
+                        FROM video_jobs
+                        WHERE status='queued'
+                        ORDER BY id ASC
+                        LIMIT 1
+                        """
+                    )
+                    job = cur.fetchone()
+
+                    if job:
+                        print(
+                            f"worker picked job id={job['id']} source={job['source_relative_path']}",
+                            flush=True
+                        )
+                        cur.execute(
+                            """
+                            UPDATE video_jobs
+                            SET status='processing',
+                                started_at=COALESCE(started_at, UTC_TIMESTAMP()),
+                                progress_percent=5
+                            WHERE id=%s
+                            """,
+                            (job["id"],),
+                        )
+                        db.commit()
+
+                        process_job(db, job)
+
+            except Exception as e:
+                print(f"worker loop error: {e}", flush=True)
+
+            time.sleep(5)
```
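The worker polls with a SELECT and then flips the row to `processing` in a separate UPDATE, which is fine with a single worker but would let two workers pick the same job. One common hardening is to make the UPDATE itself the claim and check the affected-row count; a sketch using sqlite3 purely as a stand-in for the MariaDB queue:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE video_jobs (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO video_jobs (status) VALUES ('queued')")


def claim_next(db):
    """Claim the oldest queued job.

    The `AND status='queued'` guard plus the rowcount check means that
    if another worker claimed the row between our SELECT and UPDATE,
    rowcount is 0 and we simply retry on the next poll.
    """
    row = db.execute(
        "SELECT id FROM video_jobs WHERE status='queued' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    cur = db.execute(
        "UPDATE video_jobs SET status='processing' WHERE id=? AND status='queued'",
        (row[0],),
    )
    return row[0] if cur.rowcount == 1 else None


first = claim_next(db)   # claims job 1
second = claim_next(db)  # nothing queued left
```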
**Video Workshop template** (new file, `@@ -0,0 +1,137 @@`)

```html
{% extends "portal_base.html" %}

{% block title %}Video Workshop - OTB Cloud{% endblock %}

{% block portal_content %}

<style>
  #profile {
    background: #1e293b;
    color: #e5e7eb;
    border: 1px solid rgba(255,255,255,0.18);
  }
  #profile option {
    background: #1e293b;
    color: #e5e7eb;
  }
</style>

<div class="portal-page-header">
  <div>
    <h1 class="portal-page-title">Video Workshop</h1>
    <p class="portal-page-subtitle">Device ID: <strong>{{ device_id }}</strong></p>
  </div>
  <div class="portal-toolbar" style="display:flex;gap:10px;flex-wrap:wrap;">
    <a class="portal-btn" href="/devices/{{ device_id }}/files">Back to Device Files</a>
    <a class="portal-btn" href="/portal">Back to Portal</a>
  </div>
</div>

<div class="service-card" style="margin-top:18px;">
  <div class="service-card-header">
    <div>
      <h2>Queue Video Jobs</h2>
      <p>Selected files from the device browser are staged in your browser and can now be queued for processing.</p>
    </div>
    <div>
      <span class="service-badge service-badge-beta">alpha3-a</span>
    </div>
  </div>

  <div class="service-card-body" style="display:flex;flex-direction:column;gap:16px;">
    <div>
      <label for="profile"><strong>Profile</strong></label><br>
      <select id="profile" class="portal-input" style="max-width:320px;margin-top:8px;">
        <option value="default">Default</option>
        <option value="compress">Compress</option>
        <option value="hq">High Quality</option>
      </select>
    </div>

    <div>
      <strong>Selected items</strong>
      <pre id="selected-files" style="white-space:pre-wrap;background:rgba(255,255,255,0.04);padding:12px;border-radius:12px;overflow:auto;min-height:80px;"></pre>
    </div>

    <div style="display:flex;gap:10px;flex-wrap:wrap;">
      <button class="portal-btn primary" type="button" onclick="processWorkshop()">Process</button>
      <button class="portal-btn" type="button" onclick="loadJobs()">Refresh Jobs</button>
      <button class="portal-btn" type="button" onclick="clearWorkshopSelection()">Clear Selection</button>
    </div>
  </div>
</div>

<div class="service-card" style="margin-top:18px;">
  <div class="service-card-header">
    <div>
      <h2>Jobs</h2>
      <p>Live queue/status feed for this tenant.</p>
    </div>
  </div>
  <div class="service-card-body">
    <pre id="jobs" style="white-space:pre-wrap;background:rgba(255,255,255,0.04);padding:12px;border-radius:12px;overflow:auto;min-height:140px;"></pre>
  </div>
</div>

<script>
function getWorkshopSelection() {
  try {
    return JSON.parse(localStorage.getItem("videoSelection") || "[]");
  } catch (e) {
    return [];
  }
}

function renderWorkshopSelection() {
  const files = getWorkshopSelection();
  document.getElementById("selected-files").textContent =
    files.length ? JSON.stringify(files, null, 2) : "No files currently staged.";
}

function clearWorkshopSelection() {
  localStorage.removeItem("videoSelection");
  renderWorkshopSelection();
}

function processWorkshop() {
  const files = getWorkshopSelection();
  if (!files.length) {
    alert("No files staged for workshop.");
    return;
  }

  fetch("/api/video/enqueue", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({
      device_id: {{ device_id }},
      files: files,
      profile: document.getElementById("profile").value
    })
  })
  .then(r => r.json())
  .then(d => {
    document.getElementById("jobs").textContent = JSON.stringify(d, null, 2);
    loadJobs();
  })
  .catch(err => {
    document.getElementById("jobs").textContent = "Enqueue failed: " + err;
  });
}

function loadJobs() {
  fetch("/api/video/jobs")
    .then(r => r.json())
    .then(d => {
      document.getElementById("jobs").textContent = JSON.stringify(d, null, 2);
    })
    .catch(err => {
      document.getElementById("jobs").textContent = "Job load failed: " + err;
    });
}

renderWorkshopSelection();
loadJobs();
setInterval(loadJobs, 3000);
</script>
{% endblock %}
```
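The template POSTs `{device_id, files, profile}` to `/api/video/enqueue`. The endpoint's handler is not part of this diff; presumably it loops the staged filenames into `create_video_job`. A framework-free sketch of that payload handling (the `enqueue_payload` helper and the stubbed job creator are hypothetical, shown only to mirror the JSON shape the template sends):

```python
def enqueue_payload(payload, create_job):
    """Validate the workshop POST body and queue one job per staged file.

    `create_job(device_id, filename, profile)` stands in for the real
    job-creation call; everything here is an illustrative assumption.
    """
    device_id = payload.get("device_id")
    files = payload.get("files") or []
    profile = payload.get("profile") or "default"
    if device_id is None or not files:
        return {"ok": False, "error": "device_id and files are required"}
    job_ids = [create_job(device_id, name, profile) for name in files]
    return {"ok": True, "jobs": job_ids}


# Stub job creator: hands out sequential ids.
_counter = iter(range(1, 100))
result = enqueue_payload(
    {"device_id": 27, "files": ["05142013003.mp4"], "profile": "default"},
    lambda device_id, filename, profile: next(_counter),
)
```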