Build cluster machine donations wanted

Got any spare multicore Xeon (or better) boxes with tons of RAM and disk?

When the build for OpenWrt breaks, everyone suffers. Development for OpenWrt and subsidiary projects such as CeroWrt, DD-WRT, Gargoyle, Buffalo, etc. comes to a screeching halt. Worse, breakage tends to happen during periods of high commit volume. It can sometimes take days to resolve, as a single build can take as long as 20 hours when run serially on non-optimized hardware (the record on top-of-the-line gear is about 10 hours). During the breakage, dozens of developers twiddle their thumbs and have to improvise around the problems in order to get their individual tasks done.

So although the OpenWrt organization maintains a buildbot cluster, it is sorely undersized, and always in need of expansion and love.

ISC has set aside a portion of rack space for us, but we lack machines to fill it. ISC has also loaned 3 machines to the effort, and Dave Taht (after being crippled one too many times by build breakage) donated his top-end 12-core desktop as well. About four more machines, loaned by individuals from their homes, are also in the cluster. Even with these donations in play, 10 or more additional high-end machines would produce much better results, cutting cycle time from 3 or more days to under a day.

Hardware Requirements

Doing software builds is a very compute- and disk-intensive task. The ROI of doing the compute on an EC2 instance is much worse than doing it on a string of dedicated, nearly-top-of-the-line boxes.

If you have a decent machine you can loan to the effort, even part time, please contact Travis Kemen (thepeople on IRC, thepeople AT openwrt.org). A Celeron won't cut it, however!

An ideal box has at least 4 cores, 16 GB of RAM, and 2 TB of disk. An SSD is highly desirable, and with more RAM, the bulk of a build can be done in memory. Also, as the end result is over a GB of files that need to be uploaded, reasonable upload bandwidth is useful.
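If you want a quick, unofficial way to see whether a box clears that bar, a small script along these lines will do; the thresholds are simply the numbers above, and the RAM check reads /proc/meminfo, so it assumes a Linux host.

```python
#!/usr/bin/env python3
"""Rough self-check against the build box minimums listed above.

Unofficial sketch, not an openwrt.org tool: the thresholds (4 cores,
16 GB RAM, 2 TB disk) are copied from this page, and the memory check
is Linux-specific.
"""
import os
import shutil

MIN_CORES = 4
MIN_RAM_GB = 16
MIN_DISK_TB = 2

def ram_gb():
    # MemTotal is reported in kB on Linux.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024 / 1024
    return 0.0

cores = os.cpu_count() or 0
ram = ram_gb()
disk_tb = shutil.disk_usage("/").total / 1000**4  # decimal TB, as disks are sold

print(f"cores: {cores} (want >= {MIN_CORES})")
print(f"RAM:   {ram:.1f} GB (want >= {MIN_RAM_GB})")
print(f"disk:  {disk_tb:.2f} TB (want >= {MIN_DISK_TB})")

if cores >= MIN_CORES and ram >= MIN_RAM_GB and disk_tb >= MIN_DISK_TB:
    print("Looks like a candidate -- please get in touch!")
else:
    print("Probably below the bar for a buildbot machine.")
```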

If we did sane cost accounting for electricity and rack space, a single 64 GB box with 12 cores would be far more cost-effective than 3 boxes with 16 GB of RAM and 4 cores each, but we don't, and it's the up-front capital expense here, not the ongoing expense, that is making this part hard. (We've been trying to find a suitable hardware donor for over a year.)

(We're not allergic to donated compute time on a cluster instance, either; it's just not cost-effective, from our perspective, to buy any ourselves.)
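For the curious, here is a rough sketch of the kind of cost accounting meant above. Every figure in it (purchase prices, wattages, electricity rate, rack fee, amortization period) is a placeholder for illustration, not a real price or a measurement from our cluster.

```python
# Back-of-envelope accounting in the spirit of the paragraphs above.
# All numbers are placeholders: plug in actual purchase prices, measured
# wattage, your electricity rate, and your rack fee before drawing conclusions.

def yearly_cost(capex, watts, kwh_price=0.15, rack_per_year=200, years=3):
    """Capital amortized over `years`, plus electricity and rack space."""
    electricity = watts / 1000 * 24 * 365 * kwh_price
    return capex / years + electricity + rack_per_year

# Hypothetical 12-core / 64 GB box vs. three 4-core / 16 GB boxes.
one_big     = yearly_cost(capex=2500, watts=250)
three_small = 3 * yearly_cost(capex=700, watts=150)

print(f"one 12-core box:    ~${one_big:.0f}/year")
print(f"three 4-core boxes: ~${three_small:.0f}/year")
```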
