Infinity Bots - History

Partially Degraded Performance

96% uptime

Production - Degraded Performance

96% uptime
April 2025 · 100.000%, May · 93.9135%, June · 93.4766%

Development - Operational

97% uptime
April 2025 · 100.000%, May · 96.7115%, June · 94.9102%

Documentation - Operational

99% uptime
April 2025 · 100.000%, May · 100.000%, June · 95.5282%

Staff Panel - Operational

97% uptime
April 2025 · 100.000%, May · 95.4099%, June · 95.3602%

Widgets - Operational

93% uptime
April 2025 · 100.000%, May · 83.6022%, June · 95.0940%
94% uptime

Production - Degraded Performance

91% uptime
April 2025 · 100.000%, May · 89.1873%, June · 85.2706%

Development - Degraded Performance

96% uptime
April 2025 · 100.000%, May · 94.1470%, June · 94.8889%
100% uptime

Cloudflare → Always Online - Operational

Cloudflare → DNS Firewall - Operational

Cloudflare → DNS Root Servers - Operational

Cloudflare → DNS Updates - Operational

Cloudflare → Firewall - Degraded Performance

Cloudflare → Gateway - Operational

Cloudflare → Network - Operational

Cloudflare → Pages - Operational

Discord → API - Operational

Discord → Gateway - Operational

Github → Actions - Operational

Github → API Requests - Operational

Github → Git Operations - Operational

Github → Issues - Operational

Github → Pull Requests - Operational

Github → Webhooks - Operational

Ionos → Cloud Backup - Operational

Ionos → Cloud Server - Operational

Stripe → Stripe API - Operational

Vercel → Builds - Operational

Vercel → Build & Deploy - Operational

Vercel → DNS - Operational

History

June 2025

Server Downtime
  • Postmortem

    📝 Postmortem: June 2025 Service Outage

    Incident Duration:

    June 16, 2025 – June 21, 2025

    Status: Resolved

    Root Cause: Misconfigured infrastructure and networking components

    📌 Summary

    Between June 16 and June 21, 2025, our services experienced a prolonged and critical disruption. This impacted system accessibility, network stability, and overall deployment reliability. The root causes were traced back to multiple misconfigurations within our new infrastructure stack, primarily involving our Dokploy instance, networking setup, and reverse proxy (Traefik).

    ⚙️ Technical Cause

    Upon investigation, we identified several compounding issues:

    • Misconfigured Dokploy Instance: The initial deployment lacked critical network isolation and routing configurations, leading to service timeouts and container miscommunication.

    • Traefik Reverse Proxy: Misconfigured routing and TLS handling caused failed ingress connections and prevented external traffic from reaching internal services.

    • Networking Setup Errors: Overlapping subnets and improperly bridged networks led to intermittent connectivity between deployer and host machines, further destabilizing the system (a minimal overlap-check sketch follows this list).

    • Missing Health Checks: Some containers were not being monitored properly, which delayed automatic restarts and extended service downtime.
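
    The overlapping-subnet failure above is mechanical enough to catch automatically. The following is a minimal sketch, assuming a hypothetical set of container networks, of a pre-deploy check that flags overlapping address ranges with Python's standard ipaddress module; the network names and CIDR ranges are placeholders, not our real layout.

    ```python
    # Hypothetical pre-deploy check: flag container networks whose address
    # ranges overlap. Network names and CIDR ranges are illustrative only.
    from ipaddress import ip_network
    from itertools import combinations

    networks = {
        "deployer_bridge": ip_network("172.18.0.0/16"),
        "host_bridge": ip_network("172.18.64.0/18"),   # overlaps deployer_bridge
        "traefik_ingress": ip_network("10.10.0.0/24"),
    }

    def find_overlaps(nets):
        """Return every pair of named networks whose address ranges overlap."""
        return [
            (name_a, name_b)
            for (name_a, net_a), (name_b, net_b) in combinations(nets.items(), 2)
            if net_a.overlaps(net_b)
        ]

    for name_a, name_b in find_overlaps(networks):
        print(f"WARNING: {name_a} and {name_b} overlap")
    ```

    Run as part of CI, a check along these lines could flag a deployer/host address conflict before it reaches production.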

    🚑 Immediate Actions Taken

    • Isolated deployer and host networks to stabilize inter-service traffic.

    • Corrected routing rules and middleware configuration in Traefik.

    • Rebuilt the Dokploy configuration with clearer network separation and improved error handling.

    • Re-enabled and audited health checks across services (a minimal probe sketch follows this list).

    • Conducted live testing and verification to ensure full service restoration by June 21, 2025.
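
    To complement the re-enabled health checks, here is a minimal sketch, under assumed values, of the kind of probe a container healthcheck can invoke; the endpoint, port, and timeout are illustrative, not a description of our production services.

    ```python
    # Hypothetical liveness probe for a container healthcheck.
    # Endpoint, port, and timeout are illustrative assumptions.
    import sys
    import urllib.request

    HEALTH_URL = "http://127.0.0.1:8080/health"  # assumed internal endpoint

    def main() -> int:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return 0 if resp.status == 200 else 1
        except Exception:
            return 1  # non-zero exit marks the container unhealthy

    if __name__ == "__main__":
        sys.exit(main())
    ```

    An orchestrator that runs a probe like this on an interval can restart an unresponsive container automatically instead of waiting for manual intervention, which is what extended the downtime here.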

    ✅ Resolution and Recovery

    The system was gradually stabilized beginning on June 16, with partial access restored within 30 minutes of our first major fix. However, additional network-level issues prolonged the resolution timeline. By June 21 at 3:35 AM, all services were fully restored and verified functional.

    📚 Lessons Learned

    • Configuration reviews must be enforced before production deployment of new infrastructure tools.

    • Network planning (IP ranges, bridges, proxies) needs to be documented and peer-reviewed.

    • Critical systems (like DNS routing, ingress, and orchestration layers) must have dedicated monitoring and rollback plans.

    🔧 Preventative Measures

    • Implement automated preflight checks in our deployment pipelines.

    • Schedule recurring audits of proxy and ingress configurations.

    • Build fallback container orchestration playbooks for Dokploy-based deployments.

    • Expand post-deploy smoke testing to catch network-level regressions earlier (a minimal smoke-test sketch follows this list).
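
    To make the first and last items above concrete, here is a minimal sketch of a post-deploy smoke test that resolves each public hostname and confirms it answers over HTTPS through the ingress; the hostnames are deliberate placeholders, not our real domains.

    ```python
    # Hypothetical post-deploy smoke test: confirm DNS resolution and HTTPS
    # reachability for each public hostname. Hostnames are placeholders.
    import socket
    import sys
    import urllib.request

    HOSTS = [
        "production.example.invalid",
        "docs.example.invalid",
    ]

    def check(host):
        try:
            socket.getaddrinfo(host, 443)  # DNS must resolve
            with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
                return resp.status < 500   # ingress answered without a server error
        except Exception as exc:
            print(f"FAIL {host}: {exc}")
            return False

    if __name__ == "__main__":
        results = [check(h) for h in HOSTS]
        sys.exit(0 if all(results) else 1)
    ```

    Wired into the deployment pipeline, a non-zero exit from a script like this can block promotion or trigger a rollback before a network-level regression reaches users.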

    🗣️ Final Note

    We sincerely apologize for the extended downtime and the impact it had on your experience. While our intention was to modernize our infrastructure, we recognize that our transition planning and oversight fell short. This will be addressed internally, and improvements are already underway.

    Thank you for your patience and continued support.

  • Resolved
    This incident has been resolved.
  • Update

    We apologize that these issues have been ongoing for so long. Our team is still working to resolve them and to properly configure our new Dokploy network.

  • Update

    Some additional issues have arisen, and we are working on a fix.

  • Update

    We have isolated the networks between our deployer and host machine to help stabilize long-term usage, and our team is currently finishing the final setup stages. Services should start being restored within the next 20-30 minutes.

  • Identified

    We have identified that this issue is due to Dokploy. We are working on resolving it now.

May 2025

April 2025

No incidents were reported this month.

April 2025 to June 2025
