Odin SQL

Programming, software and technology


.NET Core runtime CoreCLR released on GitHub

February 4, 2015 by Benjamin Knigge

CoreCLR open source on GitHub

If you’re a .NET developer, you’re probably excited to hear that the .NET Core runtime, also known as CoreCLR, has been released on GitHub.

The code currently runs on Windows, with support for Linux and Mac promised in the coming months. Very soon we will be able to write MVC 6 web apps in C#, or your favorite .NET language, using Visual Studio 2015 and then deploy them to a Linux server with official support from Microsoft.

What is .NET Core?

.NET Core is essentially a stripped-down, streamlined rewrite of the .NET Framework with all of the Windows-specific code and functionality removed. .NET Core provides the core functionality of just-in-time compilation and garbage collection for your .NET code, and will support running and deploying ASP.NET vNext Model View Controller (MVC) web apps on Windows, Linux and Mac.

Why did Microsoft release CoreCLR as open source?

It isn’t altruism that has led Microsoft to open up CoreCLR. Over the past several years, Microsoft has been losing market share on the web server. PHP has been around for years and, despite its bad design and numerous security issues, doesn’t seem to be going anywhere. Ruby has been gaining ground for years, Java won’t die despite Oracle’s best efforts, and many new startups are taking a look at Go. More than a few .NET developers are tempted to leave the Microsoft development world behind and take up development with one of these open source alternatives. Developers and development tools have been a major reason for Microsoft’s past successes. Microsoft is scared of losing these developers, and the “enterprise” customers they work for, which make up the majority of its profits. Opening up ASP.NET vNext and CoreCLR and giving away Visual Studio for free is Microsoft’s attempt to keep as many developers as possible in the Microsoft sphere of influence while, hopefully, attracting a few new ones. The bet is that once a developer gains experience with C# in Visual Studio, they are more likely to build a desktop app with C# and Visual Studio than to abandon Windows entirely for Linux.

Why am I so excited about CoreCLR on Linux?

I’ve been working with ASP.NET since it was in beta, with brief hiatuses to work on Java, Perl and PHP web sites. During that time I’ve come to appreciate Visual Studio and C#. C# is a great programming language, and Visual Studio is undoubtedly the best IDE available today. Unfortunately, I’ve found myself working less with C# over the past couple of years, primarily because deploying ASP.NET apps on Windows servers is significantly more costly than deploying apps on an inexpensive Linux VPS. Soon the cost of deployment should no longer be an issue. I’m looking forward to writing ASP.NET MVC apps in C# using Visual Studio and deploying them behind an Nginx web server.

Another concern of mine has been the performance of .NET apps versus apps written in natively compiled languages such as C and Go. Fortunately, Microsoft has also announced .NET Native. Apps compiled with .NET Native use significantly less memory and perform better than apps running on the just-in-time compiled common language runtime.

A year ago nobody would have believed you if you had told them that Microsoft would be releasing ASP.NET for Linux, and now it looks like it will be here by summer 2015.

 

Leave me a comment and let me know your thoughts on .NET Core, ASP.NET vNext and .NET Native.

Filed Under: programming, web development Tagged With: .NET, ASP.NET, C#, Linux, nginx

A comparison of WordPress caching options

January 13, 2015 by Benjamin Knigge

WordPress logo

I’ve spent the past couple of days bringing my site up and down while comparing the many caching options available for WordPress. This is the synopsis of my research. Hopefully this information can help someone else make a more informed decision about the many caching options available.

If you want to implement any of these you’re going to need to be hosting WordPress on your own VPS unless your hosting company is willing to implement caching for you.

The server this was tested on is a $10, single-CPU Linode VPS with SSD storage. The memory and CPU constraints of a low-cost VPS guided my decision-making process. If I had more RAM available I probably would have used memcached more, despite the fact that the performance I’m able to get out of an SSD is great. The initial installation of Nginx was done using EasyEngine, which greatly simplifies the configuration required for WP Super Cache or W3 Total Cache. I’ve outlined getting WordPress up and running using EasyEngine here. I’m not going into detail on how to configure each one of these options; I just want to outline some of the pros and cons of the ones I’ve implemented.

What is Caching?

If you’re reading this, you probably already know. When a request is made for certain content, the response from your web server can be stored for a predetermined amount of time, either on disk or in memory. That storage is referred to as a cache.

Why cache?

Your website will serve dynamically generated pages much more quickly while putting your server under far less load.

Which caching options were compared?

  • WP Super Cache
  • W3 Total Cache
  • WP-FFPC
  • Pagespeed for Nginx
  • Nginx Proxy Cache

WP Super Cache

WP Super Cache is probably the most popular caching plugin for WordPress. I used EasyEngine, which generates the necessary Nginx config and stores it at /etc/nginx/common/wpsc.conf. I was able to serve 15,000 page requests per minute using it, and the response times were good. Nginx is very good at serving static content quickly with minimal system requirements. The only negative is that the settings revert to the defaults and need to be reset if you disable the cache and then re-enable it, which is fairly annoying.
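For reference, the heart of the generated wpsc.conf is a try_files rule that looks for the pre-built Super Cache page before ever touching PHP. This is a simplified sketch of the pattern, not the full EasyEngine file (the real config also skips the cache for POST requests, query strings and logged-in users):

# Map the request to the static page WP Super Cache may have written.
set $cache_file /wp-content/cache/supercache/$http_host/$request_uri/index.html;

location / {
    # Serve the cached static page if it exists; otherwise fall
    # through to WordPress (index.php) to generate the page.
    try_files $cache_file $uri $uri/ /index.php?$args;
}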

W3 Total Cache

The second most popular cache plugin for WordPress is W3 Total Cache. It has a lot of options that allow caching objects and database queries in memcached, but if you’re using it to generate static pages those options aren’t terribly useful. If you have a WordPress site with several thousand posts, those options could conceivably speed up access to less frequently visited posts, but that isn’t the case with my blog, so I didn’t see any benefit from storing objects in memcached. It performed approximately on par with WP Super Cache, but I would recommend WP Super Cache over W3 Total Cache because of its simplicity and the fact that W3 Total Cache shows an incredibly annoying popup nag screen every time you open the settings page.

WP-FFPC

WP-FFPC is another cache plugin for WordPress. It can be configured to cache your PHP pages to memcached, which in theory should be much faster than disk-based caching. Unfortunately, the plugin wasn’t playing well with my system, and some pages were being cached completely blank. It seems this is an issue more than one other person is experiencing, and I wasn’t able to find a solution to the problem. I also had some concerns about the limited amount of memory available on my VPS. This could be a good solution if you’re on Apache and have a decent amount of available memory, but it wasn’t working on Nginx with PHP-FPM for me.

Pagespeed

The Pagespeed module is available for Nginx and Apache; it caches and optimizes the content being served. On Nginx it actually needs to be compiled into the server. It analyzes the pages and images being served and can make changes to them such as removing comments and whitespace, minifying and combining JS and CSS, and optimizing images, all of which can theoretically reduce the time it takes to download and render a page. Pagespeed optimizes not only the page you are serving but also the content of all the included page resources such as JS, CSS and images, and can then store the results either on disk or in memcached.
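For context, once the module is compiled in, enabling it comes down to a handful of directives in the server (or http) block. A minimal sketch; the cache path and filter list here are only examples:

pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;   # where optimized resources are stored
pagespeed EnableFilters collapse_whitespace,remove_comments;
pagespeed EnableFilters combine_css,combine_javascript;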

I had numerous problems with Pagespeed. Each “fetch” of an included resource is handled in a separate thread, which isn’t ideal for a server with only one CPU. Even after a request is cached, requests are served much more slowly than with the static caching options, and I was never able to configure Pagespeed to serve more than 1,000 page requests per minute reliably. I also ran into a cached and optimized image that was corrupt. I finally gave up on Pagespeed after configuring an Nginx proxy cache in front of it, only to discover that Pagespeed sets response headers so that the resulting content won’t be stored in the cache, and there’s no simple way to change that behavior. A lot of the functionality built into Pagespeed is already handled by other WordPress plugins, and using plugins plus a static page cache is a much easier option that results in better response times. These are my results from a load test at 1,000 page requests per minute with Pagespeed enabled, the best I managed to get out of it after several hours of effort: 1/15th the number of simultaneous page requests and four times the response time.

Nginx Proxy Cache

A proxy cache sits in front of your actual web server and, depending on the data being requested, will either serve a cached version or pass the request on to another web server. A proxy cache can live on the same physical server or on a separate one, and can also work as a load balancer between multiple servers. To get this to work with WordPress, I had to add a couple of lines of code to wp-config.php so that the site would stop trying to redirect back to the secure version of itself. The issue and resolution are discussed here. Implementing this takes a bit more effort and research than a plugin like WP Super Cache, but it also performs a bit better, and the configuration can easily be extended to add load balancing or failover.
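The wp-config.php fix amounts to telling WordPress that a request was already HTTPS when the proxy says so. A minimal sketch, assuming the proxy sets the standard X-Forwarded-Proto header (your exact lines may differ):

// In wp-config.php: trust the proxy's X-Forwarded-Proto header so the
// site stops redirecting back to the HTTPS version of itself.
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
    && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}

On the Nginx side, a stripped-down proxy cache setup looks something like the following. The zone name, cache path and backend port are illustrative, not my exact config:

# Must live in the http block: 10 MB of cache keys in shared memory,
# up to 200 MB of cached responses on disk, evicted after 60 minutes idle.
proxy_cache_path /var/cache/nginx/wp levels=1:2 keys_zone=wpcache:10m
                 max_size=200m inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache wpcache;
        proxy_cache_valid 200 301 10m;              # keep good responses for 10 minutes
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme; # picked up by wp-config.php above
        proxy_pass http://127.0.0.1:8080;           # the actual WordPress backend
    }
}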

After implementing this I was able to handle 20,000 page requests per minute fairly well. Here are the results of the load test.

 

Conclusion

If you’re looking for the caching option that is simplest to implement while offering great performance, WP Super Cache is an excellent choice.

W3 Total Cache has a lot of options that will be of no use to the vast majority of the people using it. It also sports an annoyingly repetitive nag popup.

WP-FFPC wasn’t playing well with Nginx for me but if you’re on Apache and have enough available memory it could be a good option for you.

Pagespeed actually made everything slower and dramatically decreased the number of simultaneous users my site could support. It may work well on a dedicated server with many processors, but on a low-end VPS it’s worse than implementing no caching at all.

Nginx proxy cache offers better performance than any of the WordPress plugins. You can put it in front of Apache, or any website, if that’s your preference. Fail-over and load balancing are also both possible with a proxy cache. If at some point I upgrade my $10 Linode VPS to something with more memory, I can even configure Nginx to use memcached instead of disk.

Filed Under: web development Tagged With: nginx, Wordpress

How to cloak affiliate links with Nginx

January 9, 2015 by Benjamin Knigge

Nginx logo

What’s covered in this post

In this post I cover how to set up 301 redirects for affiliate links quickly and efficiently, without any plugin or redirect script, using only the built-in functionality of Nginx.

 

Why you should cloak your affiliate links

Analytics

I love numbers, and the most accurate way to track whether a link is clicked is by logging each click. If you’re serious about making money from your affiliate links, you’re going to want to know how much each click is worth for each program.

If you are relying on something like Google Analytics, which depends on JavaScript running in the visitor’s browser, you could be missing a significant amount of data: you won’t see clicks from links shared on Twitter or via email, or from users who have disabled JavaScript. The method I’m going to outline logs every click on an affiliate link via your server’s access logs.

Search Engine Optimization (SEO)

Google and other search engines will penalize sites that use affiliate links not marked as “nofollow” links. This solution uses a robots.txt file to keep search engines off the redirect domain entirely.

Easy to remember

A short link like “https://r.odinsql.com/digitalocean”  is easier to remember than “https://www.digitalocean.com/?refcode=f1ad4e30b2a2”

Easier to update

Let’s say one of the affiliate programs you’re taking part in has a special offer at a different URL that will convert better than what you’re usually getting. It’s a whole lot easier to update the link in a centralized location than to go through every blog post and update each link individually.

Why I use Nginx for affiliate link redirection

Performance

Nginx is incredibly fast at serving redirects and logging them. In the world of affiliate marketing, even a few milliseconds of extra processing time can end up costing you money. You won’t find a script or plugin that handles redirects as quickly as Nginx natively can.

Log analysis

Each click will be logged and can later be analyzed using a log analysis tool such as AWStats.
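Even without a full log analysis tool, a shell one-liner over the access log gives a quick per-link click count. A sketch, assuming Nginx’s default combined log format (where the request path is the seventh field) and the access log path used in the config later in this post:

# Count clicks per cloaked link, busiest first.
# $7 is the request path in the default combined log format.
awk '{print $7}' /var/log/nginx/r.odinsql.com.access.log | sort | uniq -c | sort -rn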

It’s free

If you already have an Nginx web server, there’s no need to pay for a plugin or script.

 

How to use Nginx for affiliate link cloaking

Create a sub-domain

First we’ll need to create a sub-domain used exclusively for redirection. In my case I’ve created the sub-domain r.odinsql.com

This needs to be done with whoever is hosting your DNS records. I use CloudFlare to host my DNS records. If you’re not already using CloudFlare’s free plan for content distribution, I highly recommend them.

Create a website for the new sub-domain

If you need instructions on how to configure and set up a website using Nginx, take a look at my tutorial on how to do it with EasyEngine. EasyEngine is the easiest and most foolproof way of setting up a new website on a VPS.

Create a robots.txt file

This new website for your sub-domain is going to have a single file: robots.txt

This is what your robots.txt should look like.

User-agent: *
Disallow: /

This tells the search engines that you don’t want them accessing anything on this site.

Put your redirect links in the new site’s Nginx config file

This is what the actual config file for r.odinsql.com looks like:

server {
    listen 80;
    listen [::]:80;

    server_name r.odinsql.com;

    access_log /var/log/nginx/r.odinsql.com.access.log;
    error_log /var/log/nginx/r.odinsql.com.error.log;

    root /var/www/r.odinsql.com/htdocs;
    index index.html index.htm;

    error_page 404 = @foobar;
    error_page 403 = @foobar;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location @foobar {
        return 301 https://odinsql.com;
    }

    location /hireme {
        return 301 https://www.elance.com/s/benjaminfredrick/;
    }

    #cloudflare
    location /cloudflare {
        return 301 https://cloudflare.com;
    }

    #hosting sites

    #linode
    location /linode {
        return 301 https://www.linode.com/?r=dc324ab41e66f3facfcc6eff74acfe74a414e739;
    }

    location /linode-n {
        return 301 https://www.linode.com/;
    }

    #digitalocean
    location /digitalocean {
        return 301 https://www.digitalocean.com/?refcode=f1ad4e30b2a2;
    }

    location /digitalocean-n {
        return 301 https://www.digitalocean.com;
    }

    #vultr
    location /vultr {
        return 301 http://www.vultr.com/?ref=6817742;
    }

    location /vultr-n {
        return 301 http://www.vultr.com;
    }

    location /cloudways {
        return 301 http://www.cloudways.com/en/?id=18942;
    }

    location /bluehost {
        return 301 http://www.bluehost.com/track/odinsql/;
    }

    location /amazon {
        return 301 http://www.amazon.com;
    }

    #ramnode
    location /ramnode {
        return 301 http://www.ramnode.com;
    }

    #sendgrid
    location /sendgrid {
        return 301 http://mbsy.co/sendgrid/17518355;
    }

    #loaderio
    location /loaderio {
        return 301 https://loader.io/s/xewK7;
    }

    location /loaderio-n {
        return 301 https://loader.io;
    }

    #amazon
    location /amazon-ec2 {
        return 301 http://aws.amazon.com/ec2/?_encoding=UTF8&camp=1789&creative=9325&linkCode=ur2&tag=o0c22-20&linkId=744HAUET442HAPV4;
    }
}

I’ve created a location block for each affiliate link. The location is just a short name that I use for the affiliate link, followed by a 301 redirect to the actual affiliate URL. Any 404 or 403 errors result in a redirect back to my blog, odinsql.com. If I need to add or edit a link, I open the config, edit it, and then run

sudo service nginx reload

This reloads any updates that I’ve made to the config into Nginx.
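A typo in a location block can take every site on the server down during a reload, so it’s worth having Nginx validate the file first and only reloading if the test passes:

# Validate the config, then reload only if the syntax check passes.
sudo nginx -t && sudo service nginx reload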

For now I don’t have that many affiliate links. As I continue to blog and add more, I may decide to break this config file into separate, smaller includes. For now it’s a quick and easy solution that’s fast and free.

 

If you’re in need of a quality VPS host to test this out on, I recommend Vultr. Vultr offers low-cost, high-performance VPSes with SSD storage, with hourly and monthly billing options available.

Sign up for Vultr

 

That’s it for this post. It might be a bit too technical for the average affiliate marketer, but it’s how I cloak affiliate links. If you have a question about this, please leave me a comment. If you would like to stay updated regarding this blog, join my mailing list by filling out the form in the right-hand navigation.

Filed Under: web development Tagged With: nginx, SEO

PHP – Maximum upload file size

January 7, 2015 by Benjamin Knigge

What does this post cover?

In this post I cover how to increase the PHP maximum file upload size. Although this post is PHP-FPM specific, the same variables need to be edited in whatever version of PHP you are using. I also cover updating the nginx.conf, which is Nginx-specific.

Why this topic?

I manage the server for a client that is hosting WordPress on a Linode VPS. Recently they’ve decided to start doing a podcast. When my client went to upload the audio for the podcast, they encountered the following error in WordPress.

Maximum upload file size: 8mb

8M is the default post_max_size in the php.ini shipped with the php5-fpm package for Ubuntu 14.04; other distributions may see a similar error with a different value.

After checking and updating all of the relevant values, it was suggested that I write a brief blog post about what needs to be updated, and I agreed.

So if you’re encountering an error like “Maximum upload file size: 8M”, here’s the solution.

Update php.ini

You will need to edit the following two variables in your php.ini.

The default location for the php.ini on Ubuntu 14.04 with the php5-fpm package is /etc/php5/fpm/php.ini

post_max_size = 100M

upload_max_filesize = 100M

In the above example I’ve set the size to 100M, for 100 megabytes. For 100 kilobytes you would use 100K, and if you wanted to set the values to 1 gigabyte you could use 1G. So the value is a number followed by K, M or G, without a space. You should only set the size to the maximum file size that you are actually likely to upload. Although it’s tempting to set the values to an enormous size, this could result in the exhaustion of all system resources and crash your server, so be reasonable. The value of “upload_max_filesize” is only effective up to the limit imposed by “post_max_size”. In practice you will most likely set them both to the same value.
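To confirm that your edits took, you can grep the same file you just changed (path as above for Ubuntu 14.04 with php5-fpm):

# Confirm both values are now set as intended.
grep -E '^(post_max_size|upload_max_filesize)' /etc/php5/fpm/php.ini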

Update nginx.conf

You will also need to edit your nginx.conf located by default at /etc/nginx/nginx.conf on Ubuntu 14.04 when Nginx is installed via apt-get.

Edit the “client_max_body_size” directive in the http block of your nginx.conf:

client_max_body_size 100m;
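In context, it looks something like this sketch (the directive can also be set per server or per location):

http {
    # ... existing settings ...

    # Must be at least as large as PHP's post_max_size, or Nginx will
    # reject the upload with a 413 error before PHP ever sees it.
    client_max_body_size 100m;
}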

Restart PHP

You will need to restart php5-fpm before the changes take effect. Here’s how to do that on Ubuntu.

sudo service php5-fpm restart

Reload Nginx

You will need to reload the nginx.conf before the changes that you’ve made take effect. Here’s how:

sudo service nginx reload

 

That’s it. You should now be able to upload your files without encountering the “Maximum upload file size” error.

 

If, after reading through all of this, you’ve decided that you would rather leave the management of your server in the hands of professionals, have a look at

CloudWays Managed VPS

CloudWays offers VPS management on top of DigitalOcean’s network

If you were encountering this error I hope that this post has been helpful. If you have any comments, questions, or suggestions please let me know in the comments section below.  If you would like to have me answer your individual questions please join my mailing list by submitting the form on the right hand navigation of this page.

Filed Under: web development Tagged With: nginx, PHP, php-fmp, Wordpress

The importance of load testing a website (for free)

December 29, 2014 by Benjamin Knigge

So you’ve got your site up or you’ve just finished your awesome new mobile app, but how many users can it handle before it comes to a screeching halt leaving a negative impression on your end users and pissing off your boss, investors or partners?

Why is load testing important?

The worst time to discover that your current infrastructure can only handle 100 simultaneous users is when you have 1,000 simultaneous users trying to access your site after you’ve spent a significant amount of money on marketing, or when you’re lucky enough to have a positive article about your company published in a major newspaper or magazine.

If you plan on 10,000 simultaneous users, you need to test for at least that many users. It’s easy to write a web app that can respond to one user at a time; it’s significantly more difficult to write one that can scale up to 100,000 or a million simultaneous users.

Many of my freelance clients approach me in extreme desperation after hiring the lowest bidder on one of the many freelance development marketplaces and discovering that their site becomes extremely slow, or even unresponsive, with only a few simultaneous users. The developers they hired were quick to push out a site that has the illusion of functionality but a few weeks later is more of a liability than an asset. These clients are never happy when I tell them that building a web site that scales well can be significantly more expensive and time-consuming than what they’ve currently invested.

 

How many users do you plan on?

Currently this blog is only three weeks old and is averaging about 500 unique users per day, with my highest day being 1,200. Ideally I would like to average 100,000 unique users per day (I can dream), but requests are not distributed evenly throughout a 24-hour period. The majority of my users visit from late morning to early evening in North American time zones. This traffic pattern is fairly typical for a website targeted at native English speakers: the United States and Canada make up the majority of the native English-speaking population on the web. I’m getting a decent amount of traffic from the UK and Australia, but those visitors typically aren’t arriving during my peak hours.

My busiest hour is between 4 and 5 pm Eastern Standard Time, and during that one hour I get about 20% of my blog traffic for the day. If that pattern holds true while my site continues to grow and gain a following, will I be able to handle the traffic on my current $10 a month Linode VPS? If my plan is 100,000 users per day, I need to test for 20,000 requests per hour, or roughly 334 per minute, during my busiest hour.

 

How to load test?

There are numerous services and testing frameworks available. I’ve gone with loader.io because it’s easy to use and there is a free plan that can simulate up to 10,000 simultaneous users. They’re owned by SendGrid, so I don’t think they’re going to disappear anytime soon.

Setting everything up and getting started is pretty simple. You register for free (actually free, no credit card required), they send you a confirmation email, you validate your email, and then you’re ready to add a site. After entering your domain name you have to prove that the site is yours; they do this by having you upload a text or HTML file containing a unique string. This step is necessary to prevent malicious users from using their service to overwhelm a website: a test designed to discover the limits of your web site is almost identical to a denial-of-service attack if your site can’t handle the load you’re sending at it. I’ve included a screen-cap below of the test-adding interface; it’s all pretty simple. You point them at a URL and tell them how many users you want to simulate over a given period of time. You can start the test immediately or schedule it for when your site is usually under light load. It’s not a good idea to load test a live site that you or your clients depend on for revenue; you may want to point the test at a development server.

It’s important to note that the free version of loader.io only loads a single URL; it doesn’t load any included JS, CSS or images, or run any JavaScript that may be making API calls. This isn’t a problem for me, since CloudFlare is serving all of my static content and I’m not making any API calls. If your pages make many API calls in JavaScript, this might not be a suitable solution for your load testing needs.
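If you just want a rough sanity check before setting up a hosted test, ApacheBench behaves similarly in that it also hammers a single URL without fetching page assets. A sketch (the host name is a placeholder; on Ubuntu, ab ships in the apache2-utils package):

# 1,000 requests total, 50 at a time, against a single URL.
# Point this at a development server, not a live revenue-generating site.
ab -n 1000 -c 50 http://dev.example.com/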

Loader.io add-a-test interface (click to enlarge)

 

What do the results look like?

I’m fairly confident in the way I’ve set up this WordPress blog. It may only be running on a $10 Linode VPS, but I’ve configured WP Super Cache, Nginx is really good at serving static files, and I’m also using CloudFlare as my content distribution network. I think my site can handle 10,000 users per minute. Let’s see if I’m correct.

loader.io results for OdinSQL (click to enlarge)

So it looks like sending 10,000 requests at my server in a minute resulted in a 100% success rate. If my site can serve 10,000 users in a minute, it should be fine handling 100,000 to 200,000 users per day without any problem.

If you would like to see more detailed results from my test here’s a link to the results on loader.io

 

Update:

After my initial post I decided to test the limits of my current configuration. I ran tests at 15,000, 17,500 and 20,000 requests per minute.

The site handled 15,000 requests per minute pretty well. –15,000 report–

At 17,500 my response times were starting to get pretty bad but the site was still functional. –17,500 report–

20,000 requests per minute brought this site down: response times were over 3 seconds, and about a third of the requests resulted in a timeout error. I had to abort the test at 55 seconds. –20,000 report–

So now I know that if I’m getting more than 15,000 page requests per minute, I’ll need to do some upgrades. Realistically, though, I would upgrade long before that.

If you test your site, please post the results here in the comments. Let us know what kind of hosting you are using and your configuration. If your results aren’t that great, I’ll try to give you some advice and help you improve them.

 

If you would like to test your own site
Try loader.io for Free

 

If you’re in need of a great web host I highly recommend that you

Sign up for a Linode SSD VPS

If, after reading through all of this, you’ve decided that you would rather leave the management of your server in the hands of professionals, have a look at

CloudWays Managed VPS

CloudWays offers VPS management on top of DigitalOcean’s network

That’s it for this post. I hope that you’ve found the information it contains useful. If you have any questions, criticism, advice or suggestions, please leave me a comment below and let me know. If you would like to stay updated about new content on OdinSQL.com, I invite you to join my email list by filling out the short form in the right-side navigation. I promise not to fill your inbox with spam if you do.

Filed Under: Hosting, programming, web development Tagged With: loader.io, SendGrid, Wordpress
