Why Is the HaBangNet DNS System 100% Reliable and Fast?

HaBangNet uses an Anycast DNS setup. Anycast is a network addressing and routing methodology in which datagrams from a single sender are routed to the topologically nearest node in a group of potential receivers, though they may be sent to several nodes, all identified by the same destination address.

Currently the HaBangNet DNS network is built from servers in five different locations: two in the USA, one in Asia, one in the UK, and one in Germany.

Addressing methods

The Internet Protocol and other network addressing systems recognize five main addressing methodologies:

  • Anycast addressing uses a one-to-nearest association: datagrams are routed to a single member of a group of potential receivers that are all identified by the same destination address.
  • Broadcast addressing uses a one-to-many association: datagrams are routed from a single sender to multiple endpoints simultaneously in a single transmission. The network automatically replicates datagrams as needed for all network segments (links) that contain an eligible receiver.
  • Multicast addressing uses a one-to-many-of-many association: datagrams are routed from a single sender to multiple selected endpoints simultaneously in a single transmission.
  • Unicast addressing uses a one-to-one association between destination address and network endpoint: each destination address uniquely identifies a single receiver endpoint.
  • Geocast refers to the delivery of information to a group of destinations in a network identified by their geographical locations. It is a specialized form of multicast addressing used by some routing protocols for mobile ad hoc networks.


Anycast allows any operator whose routing information is accepted by an intermediate router to hijack any packets intended for the anycast address. While this at first sight appears insecure, it is no different from the routing of ordinary IP packets, and no more or less secure. As with conventional IP routing, careful filtering of who is and is not allowed to propagate route announcements is crucial to prevent man-in-the-middle or blackhole attacks. The former can also be prevented by encrypting and authenticating messages, such as using Transport Layer Security, while the latter can be frustrated by onion routing.


Anycast is normally highly reliable, as it can provide automatic failover. Anycast applications typically feature external “heartbeat” monitoring of the server’s function, and withdraw the route announcement if the server fails. In some cases this is done by the actual servers announcing the anycast prefix to the router over OSPF or another IGP. If the servers die, the router will automatically withdraw the announcement.

“Heartbeat” functionality is important because, if the announcement continues for a failed server, the server will act as a “black hole” for nearby clients; this failure mode is the most serious mode of failure for an anycast system. Even in this event, this kind of failure will only cause a total failure for clients that are closer to this server than any other, and will not cause a global failure.
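
In practice the heartbeat can be as simple as a periodic health probe whose result decides whether the route announcement stays up. A minimal sketch, assuming a DNS daemon listening on localhost and a routing daemon such as bird (both are assumptions for illustration, not a description of HaBangNet's actual setup):

```shell
# Probe the local DNS server; if it stops answering, the anycast route
# should be withdrawn (for example by stopping the local routing daemon).
if timeout 2 dig +time=2 +tries=1 @127.0.0.1 . SOA > /dev/null 2>&1; then
    status="healthy"          # keep announcing the anycast prefix
else
    status="failed"           # withdraw it, e.g.: systemctl stop bird
fi
echo "dns $status"
```

A real deployment would run this from cron or a monitoring agent and add hysteresis so that a single dropped query does not flap the route.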

Local and global nodes

Some anycast deployments on the Internet distinguish between local and global nodes to benefit the local community, by addressing local nodes preferentially. An example is the Domain Name System. Local nodes are often announced with the no-export BGP community to prevent hosts from announcing them to their peers, i.e. the announcement is kept in the local area. Where both local and global nodes are deployed, the announcements from global nodes are often AS prepended (i.e. the AS is added a few more times) to make the path longer so that a local node announcement is preferred over a global node announcement.
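
As a rough illustration of these two policies, export filters in a BIRD-style configuration might look like the following (syntax simplified and the ASN hypothetical; exact directives vary by routing daemon and version):

```
# Local node: tag the announcement with the well-known no-export
# community (65535:65281) so it stays within the local area.
protocol bgp local_node {
    export filter {
        bgp_community.add((65535, 65281));
        accept;
    };
}

# Global node: prepend our own AS (hypothetical 65001) extra times so the
# path is longer and local-node announcements win where both are visible.
protocol bgp global_node {
    export filter {
        bgp_path.prepend(65001);
        bgp_path.prepend(65001);
        accept;
    };
}
```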

All HaBangNet web hosting is covered by our Anycast DNS.

What is High Availability?

High Availability is a term used to describe the procedures, infrastructure, and system design that ensure a specified level of accessibility to your server. Accessibility requires both power and network connectivity as well as a functional server. If any of these requirements is compromised, the server is said to be unavailable. This level of availability is most often specified in a Service Level Agreement (SLA). Usually a set amount of credits is issued if the provider fails to meet the agreement. The amount of credits as well as the level of availability can vary from provider to provider. The typical metric used to describe a high availability service is the percentage of availability.

A table showing the amount of downtime allowable based on a typical availability percentage is shown below.

Availability % Downtime per year Downtime per month* Downtime per week
98% 7.30 days 14.4 hours 3.36 hours
99% 3.65 days 7.20 hours 1.68 hours
99.5% 1.83 days 3.60 hours 50.4 minutes
99.8% 17.5 hours 86.2 minutes 20.1 minutes
99.9% (“three nines”) 8.76 hours 43.2 minutes 10.1 minutes
99.99% (“four nines”) 52.6 minutes 4.32 minutes 1.01 minutes
99.999% (“five nines”) 5.26 minutes 25.9 seconds 6.05 seconds
99.9999% (“six nines”) 31.5 seconds 2.59 seconds 0.605 seconds

* Month calculation is based on a standard 30 day calendar month.
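
The table values follow directly from the availability percentage; here is a quick sketch of the arithmetic for 99.9%, using the same 30-day month:

```shell
# Allowable downtime for a given availability percentage.
avail=99.9
awk -v a="$avail" 'BEGIN {
    frac = (100 - a) / 100                       # unavailable fraction of time
    printf "per year:  %.2f hours\n",   frac * 365 * 24      # 8.76 hours
    printf "per month: %.2f minutes\n", frac * 30 * 24 * 60  # 43.20 minutes
    printf "per week:  %.2f minutes\n", frac * 7 * 24 * 60   # 10.08 minutes
}'
```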

As you can see, the difference between 99.9% (three nines) and 99.99% (four nines) is quite significant. Most businesses can live with the chance of about one minute of downtime per week, but when you start to gamble with ten minutes of downtime every week, which may come during peak hours, you could be putting your business at significant financial risk. With each extra nine, you cut the allowable downtime to one tenth of the previous amount.

What does this mean for your server?

It means that not all “high availability” services are equal. The term is used widely for various levels of availability, so it is crucial to ask your provider exactly what percentage you’re paying for, as well as what steps are in place to ensure that level of availability is met. Depending on the SLA, you can obtain service credits when an availability agreement is not fulfilled, but most companies would much rather have their servers up than get service credits, so it is important to ask several questions about a provider's high availability environment before entering into a contract.

What is a High Availability Environment?

A high availability environment is the infrastructure and procedures put into place to ensure a high level of availability. This is usually accomplished by setting up an environment with no single points of failure. What does this mean? It means that if one aspect of the architecture were to fail, there is an additional connection in place to be used, and therefore no disruption to the accessibility of the server. It also means that multiple things must go wrong for a server to lose availability, which greatly decreases the chances of downtime. A redundant power supply and redundant network connections are a must for certification of a top tier data center and for a high availability configuration. This ensures that power and network connectivity are provided with a very low chance of interruption.

How does this work?


This is a configuration that HaBangNet uses, and as you can see, there is a lot going on. On the power side, there are two separate, independent power runs from the server to the utility power source, and backup generators are in place to deliver power to two separate power supplies on the server. On the network side, two core routers are fed from multiple Internet Service Providers and cross-meshed between both routers and network access switches. It should also be noted that it is important that your network connections have multiple entry points into your data center and that each ISP is on a separate fiber to further mitigate the risk of downtime. This is just one “high availability” configuration, but a very good one at ensuring reliable access to both power and internet connectivity.

If you're interested in more information about this particular configuration, you can read the white paper The Anatomy of a High-Availability Rack.

In addition to power and network redundancy, a high availability environment can be further protected from loss of availability by protecting against server-side failures. This is often referred to as a high availability cluster, which can be paired with load balancing for a higher-performing configuration. This is done very similarly to highly available power and network configurations, but with a redundant server connected as well. This cluster configuration can recognize a hardware or software fault in the server and fail over to the redundant server without an interruption in service. Load balancing, taking advantage of the high availability cluster, can distribute your application’s workload evenly or asymmetrically (if configured that way) between two or more servers to help increase performance.
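
The failover idea can be sketched naively as a heartbeat that picks which node to serve from. Real clusters use dedicated software such as keepalived or Pacemaker; the hosts below are hypothetical, with localhost standing in for the primary so the sketch is runnable:

```shell
# Toy cluster heartbeat: serve from the primary while it answers,
# otherwise fail over to the redundant standby node.
PRIMARY=127.0.0.1       # stand-in for the primary node's address
STANDBY=10.0.0.11       # hypothetical standby node
if ping -c 1 -W 2 "$PRIMARY" > /dev/null 2>&1; then
    target="$PRIMARY"
else
    target="$STANDBY"
fi
echo "serving from $target"
```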

An example of a simple cluster configuration (a 2-node cluster) can be seen below.


A data center technician can work with you to configure a server environment that will suit your particular needs. This type of configuration is ideal for servers that can’t afford downtime, even scheduled maintenance downtime. Preventative maintenance is critical to limiting server-side downtime, but unfortunately, downtime is normally required for the maintenance to be performed. A high availability cluster enables your service to remain available even during maintenance.

The last issue to tackle when talking about high availability hosting is what happens when a catastrophic disaster strikes, whether natural (a fire, flood, earthquake or tornado) or man-made (human-error accidents, burglaries, and even war-related attacks).

Managing the risk of a disaster in a high availability configuration

If a disaster were to strike, like a massive earthquake, and your data center and server(s) were damaged, it wouldn’t really matter whether your server had a high availability configuration, because of the multiple failure points that usually coincide with a major disaster. That is why it is important, as part of your disaster recovery plan, to at least have your data backed up and, in some cases, to consider replication services so that your data is continually replicated to an off-site server and can be accessed in the event of a disaster. Connectivity across multiple data centers can add that additional level of availability if a disaster were to strike. On-site as well as off-site replication are options that you should consider when selecting a high availability host. It is important to note that disk mirroring and replication services, even when fully synchronized, are not the same thing as disk backup. These services do not protect against accidental deletion or human-error types of data loss; they protect against disk failure or server failure. Setting up regular online or tape backup procedures is an important consideration, in addition to data replication, to protect against disasters.


As you can see, there are many things to consider when choosing a high availability host, and depending on your application and your budget, there are various levels of protection against downtime that you can choose from. This article was meant to give a brief overview of the topic of high availability hosting and the importance of knowing which types of redundancy are in place for your server. If you are interested in more detailed descriptions of the options available for your high availability configuration, see the links below.

See Also

What is a CDN? And why you don’t need it at HaBangNet

A content delivery network or content distribution network (CDN) is a globally distributed network of proxy servers deployed in multiple data centers. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.

Content providers such as media companies and e-commerce vendors pay CDN operators to deliver their content to their audience of end-users. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their data centers. Besides better performance and availability, CDNs also offload the traffic served directly from the content provider’s origin infrastructure, resulting in possible cost savings for the content provider.[1] In addition, CDNs provide the content provider a degree of protection from DoS attacks by using their large distributed server infrastructure to absorb the attack traffic. While most early CDNs served content using dedicated servers owned and operated by the CDN, there is a recent trend[2] to use a hybrid model that uses P2P technology. In the hybrid model, content is served using both dedicated servers and other peer-user-owned computers as applicable.

And why do HaBangNet Hosting customers not need it?

The answer is simple: HaBangNet serves each website from three different server locations. This means the content of your website is stored in three different locations: Asia, the USA, and Europe.

So when a user from Europe visits your website, the “call” from our anycast DNS system will automatically point that user to the copy of your website content at the nearest location.

And by default, every customer website is protected with HaBangNet DDoS Protection of up to 10Gbps at no extra cost.

For more information on HaBangNet Global Web Hosting Services, visit http://www.habangnet.com

FTP Connection Error: Error loading directory?

We have noticed that, since the latest cPanel update, a number of people using a VPS or Dedicated Server with cPanel encounter this issue.

Please follow these steps to solve it. If you’re on a managed VPS or server with HaBangNet, please submit a ticket to support and we will fix it for you.

This looks like the customer is using passive-mode FTP and a port range is not open in the firewall to match the port range used by the FTP service. There are two options to fix this:

  • Use active-mode FTP instead of passive. This is normally selectable in the FTP client. In the command-line FTP client, you can simply type “passive” to toggle passive/active mode.
  • Configure a port range for passive-mode FTP in the FTP service configuration, and configure the server’s firewall to match.

If you are using Pure-FTPd, which is the default, you can define the passive-mode port range by editing /etc/pure-ftpd.conf and uncommenting the following directive:

# Port range for passive connections replies. - for firewalling.

# PassivePortRange          30000 50000

Once you have removed the hash mark (#) from the line starting with “PassivePortRange”, restart Pure-FTPd and edit your firewall configuration to allow traffic on the same port range.
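
The uncommenting step is a one-line edit; here is a sketch run against a scratch copy of the line (on a real server, point conf at /etc/pure-ftpd.conf instead):

```shell
# Uncomment the PassivePortRange directive (demonstrated on a scratch file).
conf=$(mktemp)
printf '# PassivePortRange          30000 50000\n' > "$conf"
sed -i 's/^#[[:space:]]*\(PassivePortRange\)/\1/' "$conf"
grep '^PassivePortRange' "$conf"
# On the real server, follow up with something like:
#   service pure-ftpd restart
#   iptables -I INPUT -p tcp --dport 30000:50000 -j ACCEPT
```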


Guide brought to you by HaBangNet – Global Web Hosting Service

5 commands to check memory usage on Linux

Memory Usage

On Linux, there are commands for almost everything, because a GUI might not always be available. When working on servers, only shell access is available, and everything has to be done with commands. So today we shall look at the commands that can be used to check memory usage on a Linux system. Memory here includes RAM and swap.

It is often important to check memory usage and memory used per process on servers, so that resources do not fall short and users are able to access the server. Take a website, for example: if you are running a web server, the server must have enough memory to serve the site's visitors. If not, the site will become very slow or even go down when there is a traffic spike, simply because memory falls short. It's just like what happens on your desktop PC.

1. free command

The free command is the simplest and easiest-to-use command for checking memory usage on Linux. Here is a quick example:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          7976       6459       1517          0        865       2248
-/+ buffers/cache:       3344       4631
Swap:         1951          0       1951

The -m option displays all data in MB. The total of 7976 MB is the total amount of RAM installed on the system, that is, 8 GB. The used column shows the amount of RAM that has been used by Linux, in this case around 6.4 GB. The output is pretty self-explanatory. The catch here is the cached and buffers columns. The second line tells us that about 4.6 GB is free: this is the free memory from the first line plus the buffers and cached amounts of memory.

Linux has the habit of caching lots of things for faster performance, so that memory can be freed and used if needed.
The last line is the swap memory, which in this case is lying entirely free.
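
The "-/+ buffers/cache" arithmetic can be reproduced from /proc/meminfo (values there are in kB, so this sketch converts to MB; recent kernels also expose a ready-made MemAvailable field):

```shell
# Free memory including buffers and cache, computed from /proc/meminfo.
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END {printf "free incl. buffers/cache: %d MB of %d MB\n",
                 (f + b + c) / 1024, t / 1024}' /proc/meminfo
```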

2. /proc/meminfo

The next way to check memory usage is to read the /proc/meminfo file. Note that the /proc file system does not contain real files. They are rather virtual files that contain dynamic information about the kernel and the system.

$ cat /proc/meminfo
MemTotal:        8167848 kB
MemFree:         1409696 kB
Buffers:          961452 kB
Cached:          2347236 kB
SwapCached:            0 kB
Active:          3124752 kB
Inactive:        2781308 kB
Active(anon):    2603376 kB
Inactive(anon):   309056 kB
Active(file):     521376 kB
Inactive(file):  2472252 kB
Unevictable:        5864 kB
Mlocked:            5880 kB
SwapTotal:       1998844 kB
SwapFree:        1998844 kB
Dirty:              7180 kB
Writeback:             0 kB
AnonPages:       2603272 kB
Mapped:           788380 kB
Shmem:            311596 kB
Slab:             200468 kB
SReclaimable:     151760 kB
SUnreclaim:        48708 kB
KernelStack:        6488 kB
PageTables:        78592 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6082768 kB
Committed_AS:    9397536 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      420204 kB
VmallocChunk:   34359311104 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB                                                                                                                           
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       62464 kB
DirectMap2M:     8316928 kB

Check the values of MemTotal, MemFree, Buffers, Cached, SwapTotal and SwapFree.
They indicate the same memory usage values as the free command.

3. vmstat

The vmstat command with the -s option lays out the memory usage statistics, much like the /proc/meminfo file. Here is an example:

$ vmstat -s
      8167848 K total memory
      7449376 K used memory
      3423872 K active memory
      3140312 K inactive memory
       718472 K free memory
      1154464 K buffer memory
      2422876 K swap cache
      1998844 K total swap
            0 K used swap
      1998844 K free swap
       392650 non-nice user cpu ticks
         8073 nice user cpu ticks
        83959 system cpu ticks
     10448341 idle cpu ticks
        91904 IO-wait cpu ticks
            0 IRQ cpu ticks
         2189 softirq cpu ticks
            0 stolen cpu ticks
      2042603 pages paged in
      2614057 pages paged out
            0 pages swapped in
            0 pages swapped out
     42301605 interrupts
     94581566 CPU context switches
   1382755972 boot time
         8567 forks

The top few lines indicate total memory, free memory, and so on.

4. top command

The top command is generally used to check memory and CPU usage per process. However, it also reports total memory usage and can be used to monitor total RAM usage. The header of the output has the required information. Here is a sample output:

top - 15:20:30 up  6:57,  5 users,  load average: 0.64, 0.44, 0.33
Tasks: 265 total,   1 running, 263 sleeping,   0 stopped,   1 zombie
%Cpu(s):  7.8 us,  2.4 sy,  0.0 ni, 88.9 id,  0.9 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   8167848 total,  6642360 used,  1525488 free,  1026876 buffers
KiB Swap:  1998844 total,        0 used,  1998844 free,  2138148 cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND                                                                                 
 2986 enlighte  20   0  584m  42m  26m S  14.3  0.5   0:44.27 yakuake                                                                                 
 1305 root      20   0  448m  68m  39m S   5.0  0.9   3:33.98 Xorg                                                                                    
 7701 enlighte  20   0  424m  17m  10m S   4.0  0.2   0:00.12 kio_thumbnail

Check the KiB Mem and KiB Swap lines in the header. They indicate the total, used and free amounts of memory. The buffer and cache information is present here too, as in the free command.
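
For scripting or logging, top can also run non-interactively in batch mode; this sketch keeps just the memory header lines (exact labels vary between procps versions, e.g. "KiB Mem" vs "MiB Mem"):

```shell
# One batch iteration of top, filtered down to the memory/swap header lines.
top -b -n 1 | grep -Ei '^((KiB|MiB) )?(Mem|Swap)'
```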

5. htop

Similar to the top command, the htop command also shows memory usage along with various other details.


The header at the top shows CPU usage along with RAM and swap usage, with the corresponding figures.

RAM Information

To find out hardware information about the installed RAM, use the dmidecode command. It reports lots of information about the installed RAM.

$ sudo dmidecode -t 17
# dmidecode 2.11
SMBIOS 2.4 present.

Handle 0x0015, DMI type 17, 27 bytes
Memory Device
        Array Handle: 0x0014
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 2048 MB
        Form Factor: DIMM
        Set: None
        Locator: J1MY
        Bank Locator: CHAN A DIMM 0
        Type: DDR2
        Type Detail: Synchronous
        Speed: 667 MHz
        Manufacturer: 0xFF00000000000000
        Serial Number: 0xFFFFFFFF
        Asset Tag: Unknown
        Part Number: 0x524D32474235383443412D36344643FFFFFF

The provided information includes the size (2048 MB), type (DDR2), and speed (667 MHz).


All the above-mentioned commands work from the terminal and do not have a GUI. When working on a desktop with a GUI, it is much easier to use a GUI tool with graphical output. The most common tools are gnome-system-monitor on GNOME and ksysguard on KDE. Both provide resource usage information about CPU, RAM, swap and network bandwidth in a graphical and easy-to-understand visual output.

CentOS Web Panel – Installation


Now you are ready to start the CWP installation.
The CWP installer can run for more than 30 minutes, because it needs to compile Apache and PHP from source.

We offer CWP installation with the default CentOS MySQL version 5.1, and the latest MariaDB as an additional option.

Installer with MySQL version 5.1

cd /usr/local/src
wget http://centos-webpanel.com/cwp-latest
sh cwp-latest

Installer with MariaDB 10.1.10

cd /usr/local/src
wget http://centos-webpanel.com/cwp-latest
sh cwp-latest -d mariadb

If the download link doesn’t work, you can use the following: http://dl1.centos-webpanel.com/files/cwp-latest

Reboot Server
Reboot your server so that all updates can take effect and to start CWP.


HaBangNet Global DNS

This DNS is only available to customers using our service, and is not open for public use.

  1. dns1.hbndns.net (Fast Loading in USA Location)
  2. dns2.hbndns.net (Fast Loading in USA Location)
  3. dns3.hbndns.net (Fast Loading in Asia Location)
  4. dns4.hbndns.net (Fast Loading in Europe Location)
  5. dns5.hbndns.net (Fast Loading in Europe Location)

Our DNS is built on a cloud-based network with 100% network uptime, with an internal anycast setup for the best routing and pointing based on geolocation detection.

If you are hosted with us, you can use these DNS settings for your domain name server pointing, for the best results worldwide.

Free & Public DNS Servers


Your ISP automatically assigns DNS servers when your router or computer connects to the Internet via DHCP… but you don’t have to use those.

Below are free DNS servers you can use instead of the ones assigned; the best and most reliable of them, from the likes of Google and OpenDNS, are listed below:

Free & Public DNS Servers (Valid March 2016)

Providers:

  • Comodo Secure DNS
  • OpenDNS Home
  • DNS Advantage
  • Norton ConnectSafe
  • Alternate DNS
  • Hurricane Electric

Note: Primary DNS servers are sometimes called preferred DNS servers and secondary DNS servers are sometimes called alternate DNS servers. Primary and secondary DNS servers can be “mixed and matched” to provide another layer of redundancy.
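
On a Linux machine, pointing at different resolvers usually means editing /etc/resolv.conf. Here is a sketch against a scratch file, using Google Public DNS (8.8.8.8 and 8.8.4.4) as the primary/secondary pair; note that many distributions regenerate this file automatically, so manual changes can be overwritten:

```shell
# Demo against a scratch file; on a real host this would be /etc/resolv.conf.
conf=$(mktemp)
{
    echo "nameserver 8.8.8.8"    # primary (preferred) DNS server
    echo "nameserver 8.8.4.4"    # secondary (alternate) DNS server
} > "$conf"
cat "$conf"
```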

Why Use Different DNS Servers?

One reason you might want to change from the DNS servers assigned by your ISP is if you suspect there’s a problem with the ones you’re using now.