Author Archives: Chris Sutton

Billing System

One thing I have not touched on yet is what we use for billing our members, accounting, etc.

My day job is software development, and one of the products I created is software to run community foundations.  This includes a full-blown online fund accounting system: credit card processing, customers, donors, AR, AP, checks, etc.

So, I already had software we could use to keep the books; all I needed to add was an “ISP” module that tracked the different radios and determined whether each radio was backbone equipment or member equipment.


The DBIUA Network module keeps track of everything, helps with the programming of the radios, and also feeds the Nagios monitoring system.  When we add someone new to the DBIUA Network module, it automatically gets pushed out to Nagios.


This system tracks the location of everything as well (latitude/longitude), and so we can also spit out a physical map of the network and how everything is connected.


The best part of this is really the automatic billing each month.  My software is already set up to integrate with Stripe payment processing, and so we have a page where members can log in and give us a credit card.  This is not saved in our system, but instead is saved over at Stripe.  At the first of each month, I press a button that automatically creates invoices for everyone.  Then another button charges everyone’s credit card using the data saved at Stripe.
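For the curious, the two buttons boil down to something like this.  This is just an illustrative Python sketch, not the actual DBIUA code: the member fields and the flat rate are made up, and the real charge step would call the Stripe API using the customer ID Stripe handed back when the member saved their card.

```python
# Illustrative sketch of the monthly billing run.  Member fields and the
# flat rate are hypothetical; the real charge step would call the Stripe
# API (a charge / payment-intent create) with the saved customer ID.
RATE_CENTS = 7500  # assumed flat monthly rate, in cents

def build_invoices(members):
    """Button 1: create one invoice per active member."""
    return [
        {"member": m["name"],
         "stripe_customer": m["stripe_customer_id"],
         "amount_cents": RATE_CENTS}
        for m in members if m["active"]
    ]

def charge_payloads(invoices):
    """Button 2: the payloads we'd send to Stripe.  Card numbers never
    touch our system; Stripe keeps them and gives us a customer ID."""
    return [
        {"customer": inv["stripe_customer"],
         "amount": inv["amount_cents"],
         "currency": "usd",
         "description": "Monthly internet service"}
        for inv in invoices
    ]
```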

I’m sure some of you are thinking this would be really cool to use for your own WISP.  Maybe you are using Quickbooks or something and your monthly billing is a total PITA.  For now, you will have to continue to suffer, but maybe in the future I will figure out how to make this available to others as well; right now it is very DBIUA-centric.


Line Loss

In my prior post, I talked about troubleshooting power issues.  And this past week we had another problem with our system that turned out to be power related again.

At our relay points, one of the difficulties is figuring out where the closest AC power is, and how to get from there to where we need our radios to live.  Generally this distance is not very far.  But in a couple cases it’s a long ways, like with the relay point in the middle of Tom’s field.  In this case we trenched the 120v AC power all the way to the relay point.

In another case, over at Jim Nelson’s point, we had to run 200 feet from the closest house out to a tree on a point, and instead of bringing 120v AC out to the tree, we ran the power over POE for the 200′.

At first we only had 2 radios at this relay point, and everything worked really well.  Eventually we added a 3rd radio, and most recently we added a webcam.  Adding the webcam pushed everything over the edge and suddenly everything at that relay point was rebooting over and over and over.  WTF?

Well, there is this thing called line loss when transmitting power over long distances.  Here is a great little webpage that helps you with those calculations.

So, we are transmitting 24v DC power over 200′, and we are using Ubiquiti carrier-class ethernet cable, which has 24 AWG wire.  And POE uses 2 pairs to carry the power.

The last thing we need to fill in is amps.  When we had 2 radios, each using 8 watts, that means about 0.4 amps per radio (8 watts / 24 volts ≈ 0.33, rounded up a bit for headroom).

Plugging all this into our calculator shows we end up with just short of 20 volts at our radios.

Adding the 3rd radio (bringing the amps up to 1.2), we fall to just shy of 18 volts, and adding the webcam we are under 16 volts, at which point things obviously started failing, probably the toughswitch that all this was plugged into.

So, our solution was to turn off the webcam to get everything running again, and then to order a bunch of 12/2 outdoor landscape lighting cable, and use that to bring the 24v power out to the relay point.  Plugging 12 AWG wire into our calculator across 200′ and 2.5 amps (the max the power supply will put out) gives us 22 volts at the relay point.
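The math behind that calculator is simple enough to sketch yourself.  Here is a little Python version; the wire resistances are standard copper values from AWG tables, and everything else comes from the numbers above:

```python
# DC line-loss math, the same calculation the webpage does.
# Resistances are standard copper values, in ohms per conductor
# per 1000 ft of wire.
OHMS_PER_1000FT = {24: 25.67, 12: 1.588}

def volts_at_load(v_supply, awg, length_ft, amps, conductors_per_leg=1):
    # Power makes a round trip, so the effective length is doubled;
    # paralleling conductors (e.g. POE's 2 pairs) halves the resistance.
    resistance = (OHMS_PER_1000FT[awg] * (length_ft / 1000.0) * 2
                  / conductors_per_leg)
    return v_supply - amps * resistance

# 2 radios (0.8 A) over 200 ft of 24 AWG cat5, 2 pairs carrying power:
print(round(volts_at_load(24, 24, 200, 0.8, conductors_per_leg=2), 1))  # 19.9

# 12/2 landscape cable (one 12 AWG conductor each way) at 2.5 A:
print(round(volts_at_load(24, 12, 200, 2.5), 1))  # 22.4
```

Those two results match the “just short of 20 volts” and “22 volts” readings from the calculator above.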

So, if you are running more than a couple of radios across a long distance POE link, you really need to do something different for power, because that tiny 24 AWG wire just doesn’t cut it for high power needs.


Computers and the networks they use to communicate with each other are complex things, and from time to time things stop working, or don’t work as well as they used to, and you have to figure out why.  Troubleshooting is a bit of an art, and in this post I will go over various troubleshooting stories, and how to try and avoid rebuilding things from scratch when you just forgot to check a checkbox on some configuration screen.

The most important part of troubleshooting is knowing there is a problem in the first place, and having as much information as possible to help figure out what is wrong. You need a system that is checking your entire network and can alert you when something goes wrong. We have installed Nagios, which is an open source network watchdog program. This checks all our backbone radios and routers every 5 minutes. Individual member endpoints are checked every 15 minutes. So, if a radio goes down, Nagios will alert us to this fact when it happens.
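For anyone curious what drives those 5-minute checks, a Nagios host definition looks roughly like this.  The host name is one of ours from later in this post, but the address, alias, and contact group here are illustrative, not our actual config:

```
define host {
    host_name           tillman-pb-5-a
    alias               Tillman field relay radio   ; illustrative
    address             10.10.0.5                   ; illustrative IP
    check_command       check-host-alive
    check_interval      5                           ; minutes
    retry_interval      1
    max_check_attempts  3
    notification_period 24x7
    contact_groups      admins
}
```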

When you are alerted to a problem, the next useful piece of information is to be able to see what may have been happening in the past. And for this we have another system installed called Cacti, which is another open source data logging program. This system checks every router, radio and member router every 5 minutes, and logs how long it takes to contact (ping time), how long the system has been up (uptime), and, if it’s a radio, the signal and noise values as well as the raw bitrate. It also records how much data has been transferred recently (bandwidth). All of these metrics are very useful in troubleshooting any problems. And without these two systems, we would really be working in the dark when something went wrong.

Start with the basics

When we see a radio go down, the first thing we try to confirm is whether there is power to the radio.  Lack of power is way more common than many people realize.

We have battery backups on all our backbone radios, so sometimes the power loss happened several hours before. One case is the infamous sheep. When we were first building things out and installing one of our relay points in the middle of Tom’s field, we had a temporary extension cord running out into the field to power our relay point. In the middle of the night we got an alert that “tillman-pb-5-a” was down. In the morning we went out to the field, and found that the extension cord was unplugged, and apparently a sheep had scratched up against it during the day and unplugged it, and so then about 8 hours later, the battery died, and the radio went offline.

Another time, we got an alert that “shipstad-ar” was down.  This is a member wifi router, and I knew that it was probably not on a battery backup.  The rest of the equipment at the Shipstads’ was still up, but my gut said it was now on battery backup.  In calling the Shipstads’ house, I found out that they were doing something in the garage and had popped a breaker turning on a heater or something.  This had turned off the wifi router, and had also stopped sending POE power to the rest of the relay point equipment at that location.  It is this type of event that has us working on being able to monitor grid power at our relay points, so we know if there is a power problem and we are on battery backup.

Sometimes the power problems are not lack of power, but not enough (meaning not enough amps).

Early on in building out our network, I put 3 radios up in one of my trees, two nanostations, and one rocket. To make life easy, I ran one wire up the tree. Nanostations have 2 network plugs and allow you to daisy chain another device. Down on the ground, I tested running one POE cable into a nanostation, then on the secondary port, running another cable to the 2nd nanostation, then from the secondary port on that radio, into the rocket. Everything turned on and lit up and I was able to login to each radio on the one wire.

But, when it all went up in the tree, and we started running traffic over the radios, everything started rebooting. Well, turns out you can only daisy chain once, and when the radio started sending traffic it started pulling more amps than was available, and so things rebooted. So, the lesson on that was to run one wire for each radio up the tree.
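You can sanity check a setup like this with some quick arithmetic before anything goes up the tree.  The numbers below are assumptions for illustration (a typical 24v 0.5 amp POE brick, and roughly 8 watts per radio under load), not measurements from our install:

```python
# Rough POE power-budget check.  Assumed numbers: one 24 V / 0.5 A
# passive POE injector, and ~8 W per radio under traffic load.
SUPPLY_WATTS = 24 * 0.5  # one injector: 12 W available

radios_watts = {"nanostation-1": 8, "nanostation-2": 8, "rocket": 8}

needed = sum(radios_watts.values())
if needed > SUPPLY_WATTS:
    print("over budget: radios will brown out and reboot under load")
else:
    print("within budget")
```

Three radios wanting roughly 24 watts from a 12 watt brick is exactly the “everything started rebooting” situation: idle, the draw squeaks by, but under traffic it doesn’t.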

But, there was another location where we only needed to have 2 radios in the tree, and this worked well, even under load.  We needed to add a 3rd radio in that location, which meant we needed to install a toughswitch, instead of running the 2 radios directly from the Tycon Power charge controller.  So, we switched the 2 radios to one port on the toughswitch, and the 3rd radio to another toughswitch port.  Then we started to get random alerts that the 2 radios were going down.  Turns out you can’t run a nanostation chained to another radio off a single toughswitch port.  So, we ran a dedicated wire to each radio, and each had its own port on the toughswitch.

After you have made sure there is power to a location, you next should check that there is not a problem with the physical wire that carries the power to the radio.

We had one time when a radio went offline, and what happened was a small branch came down, and must have hit the ethernet cable, and the cable was not completely “clicked” into the network port on the back of the radio, and so the cable popped out.

There were several times when, even though the radio had power lights and was connecting upstream, network packets were not flowing downstream.  This was usually a problem with the crimping of the cat5 end.  I had one of these recently, and even though I have done hundreds of these crimps, every once in a while I’m not paying attention, it’s a little dark, or I’m talking to someone, and one wire goes in the wrong place.  Then things either don’t work, or they half work.

One time we had a faulty POE power brick, where the little wires in the brick that are sprung to connect tightly to the cable end were stuck, and only completed the connection if you pushed the cable end into the POE brick really hard.  This caused an intermittent power problem.

Then we just had another case where a member had a faulty power strip that would turn off the power if you touched it wrong.

After confirming all the power related issues, we then get to the programming of the radios….which will be another post because this one has gotten really long.

Equipment Box

One of the tricky parts of creating the DBIUA network was getting the relay points set up with a small POE switch and battery backup.

We initially used the Tycon Power Systems UPS-DC1224-9 system.  This was a great little box that had a 9AH battery and a charge controller, and just enough room to get a Ubiquiti Toughswitch5 in there.


But the 4 allen-head screws were not the best, and after being out in the weather for a while, they would break.  So, we switched things out and used all the same parts (9AH battery, TP-SCPOE-1214 charge controller, and toughswitch), but in a Hana Wireless NEMA 12x10x5 fiberglass box.  But, the biggest problem was mounting all the stuff in the Hana box efficiently.

What I ended up doing was designing a bracket to hold the battery up at the top, and then 2 brackets to hold the toughswitch above the charge controller.  I 3D printed both of these things (the blue things in the picture below).


I have uploaded the design of both of these brackets to Shapeways, so if you want them for your own project and do not have a 3D printer, you can just order them from Shapeways.

You will need 2 toughswitch9 and 1 batterybox12.  The prices for both of these are not marked up at all.

One last part that you need is a POE splitter (Tycon Power Systems POE-SPLT-S Passive Splitter 5.5/2.1mm DC Shielded):

This allows you to pull the power off the charge controller POE out port, and power the toughswitch.  In the picture above you can see one end of this splitter plugged into the first port of the switch.  The rest of the splitter is under the switch.

Our latest version of this equipment box includes a Raspberry Pi (designed by Brett Marl) that monitors the power of the battery, as well as doing internal speed tests on our network.



Radios we use

In a previous post I mentioned the wifi routers we install in members homes.  These are the last part of the network.  So what about everything else?


We use a variety of radios depending on the situation.  For PTP links we generally use PowerbridgeM5 on both ends, though sometimes we have a PowerbridgeM5 on the upstream side, and a NanoStationM5 on the downstream side.

For one PTP link from the water tank, we have a RocketM5 with 30dBi 2′ dish, and a NanoStationM5 on the other end.  The NanoStationM5 is up at the top of a tree, and sways around a bit.

For our PTMP (Point to Multipoint) links, we always have a RocketM5 with either a 16 or 19 dBi sector, or sometimes a 13dBi omni antenna.  On the downstream side we use either NanoStationM5, or NanoLocoM5, or NanoBeamM5-16, or sometimes for a long link, a NanoBridgeM5-22.  The NanoBeamM5-16 are really great little radios as they have a nice integrated mounting bracket.


Some of our links use 900 MHz, and for these we will use a RocketM900 with a 13dBi sector (what I call the water heater, because the thing is huge).  Then on the downstream side we have NanoLocoM900s.  Sometimes we will use a NanoLocoM900 on both sides of the link if the link is not going very far, or there is just one client radio downstream.


In one location we are using 3.65 GHz (which requires an FCC license).  Here we have a RocketM365 with a 12dBi omni, and then NanoStationM365s for the downstream radios.

And in another location we are using 2.4 GHz, with a RocketM2 with a 16dBi sector and NanoStationM2s downstream.

These are all part of Ubiquiti’s AirMax line.  We are not using any AC radios yet, but we might switch to some of these on our backhaul links in the future.

My Favorite Ubiquiti Wifi Router

Our standard member install usually (always) has one radio outside the house.  This is some form of their AirMax line.  We sometimes also provide a 2.4 GHz wifi router for the member inside the house.  This is not always required if they already have a wifi router.

In order to make things easy, we decided to standardize on a Ubiquiti product here as well.

In the very beginning, we used their basic AirRouter.


We then started using the AirRouter-HP, which has better range and is powered by POE.  It is a little more expensive.


Then I ran across their AirGateway, which is this tiny little thing that plugs directly into the POE brick.  They also had an AirGateway-LR, which was similar to the AirRouter-HP.


After doing some installs with these, I decided I liked the AirRouters better, specifically the AirRouter-HP.  Why?  Not totally sure, but they seem more tried and true, and you have the ability to plug more than one thing into them if needed.

An Unthrottled Experiment

Probably all Internet services limit people with either speed tiers or data caps.

The DBIUA decided to see what would happen without imposing either of these on its members.

We got the fastest upstream connection that we could, and then we built our wireless network out and tried to provide the fastest speeds possible in an affordable manner.

In order to test the speeds on our internal network, we installed a speed test mini webpage on a server at the water tank.  This allows people to test speeds to the tank.

If you are one or two hops from the tank, you can probably get upload and download speeds in the 40+ mbps range.  If you are several hops away, then that tends to drop to the 20s.  And some places it’s around 10.  But that’s a far cry from the 1.5 mbps that you sometimes got with DSL.

So, what does our overall bandwidth usage look like going out of the tank to the internet?


This is a snapshot of 2 days.  Notice the spike in the evenings.  So even though people may be able to download faster, the reality is they don’t consume that much, and it’s only generally in the evening.  Here is another graph that shows this same traffic over a weeks time.


So, what about individual usage?  Not all ISPs graph this data, but we decided to do this so we could manage the network and identify any issues.  I think those ISPs that do graph it would probably not share this information, because it shows we actually use WAY less than we think we do.

Here is the usage graph for the connection from my house for that same 2 day period:


Notice the scale on the left.  3.0.  Not 30.0 like the above graph.  And the usage is way less.  The larger blob at the far right is watching some video.  The little spikes are various downloads.  General web browsing, or YouTube at lower rez, makes up the other green blips.

Here is another person, with a different usage pattern:


Notice the scale change again.  This is someone streaming something high def (the large green blobs).  But, there are still large swaths of time when nothing is happening.

Here are a few more



All of these individual usage patterns flow together to create the one at the top.  We have not had to throttle anyone, or impose data caps.  We allow everyone to use what is available at any given time on the network.

There have been times when a lot of people were streaming something around the same time, and guess what happened?  Things slowed down a little bit for everyone.  Sometimes there may have been a little buffering, but in general it has not been a problem.

The million dollar question is how much speed do you really need?

In my opinion, if you have a reliable 2-3mbps available to you, that is plenty.  If you can burst to faster speeds as needed, that is an extra bonus.  6-8mbps means you can stream very high def video.  But using 50 or 100mbps for long periods of time is actually not very common.

And, personally I have had times when slow speeds are not on our end, but instead on the other end of the connection, at the data center side, where a webserver might be throttled, or on a slow connection.

So, hopefully this gives you all some good real world information about our little socialist network experiment 🙂