
Ubuntu Server 7.04 Feisty: How to Set Up a Subversion Repository on Apache 2.0 using libapache2-svn

A few months ago I spent many hours pulling out my hair trying to get a Subversion repository accessible through Apache 2.0. The main benefits are the ability to browse your repository using a web browser and the ability to integrate with the Trac project management tool.

The first step is to install the apache2, subversion, and libapache2-svn packages on your server. Log in and type:

sudo apt-get install subversion apache2 libapache2-svn

Next, restart Apache. Type:

sudo /etc/init.d/apache2 restart

Since we will be authenticating specific users against the repository, we also need an SSL certificate for encryption. Type:

sudo a2enmod ssl
sudo apache2-ssl-certificate

 
After creating your SSL certificate, we need to enable port 443 on your server. To do this, edit your ‘/etc/apache2/ports.conf’ file and add the line ‘Listen 443’. Your file should look like this:

Listen 80
Listen 443

Next you need to create a virtual host file for your server. If you would like more information on Apache 2.0 and virtual hosts, consult this article. Type:

sudo touch /etc/apache2/sites-available/svn.alexkuo.info

Now you will need to edit your new virtual host file and make it similar to the following.

NameVirtualHost *:443
<VirtualHost *:443>
ServerAdmin me@alexkuo.info
ServerName svn.alexkuo.info


<Location "/">
DAV svn
SVNPath /var/svn/akuo
AuthType Basic
AuthName "ALex Kuo's Repository"
AuthUserFile /etc/apache2/dav_svn.passwd
<Limit GET PROPFIND OPTIONS REPORT>
Require valid-user
</Limit>
</Location>

CustomLog /var/log/apache2/svn.alexkuo.info.custom.log combined
ErrorLog /var/log/apache2/error.svn.alexkuo.info.log

SSLEngine on
SSLCertificateFile /etc/apache2/ssl/apache.pem
SSLProtocol all
SSLCipherSuite HIGH:MEDIUM
</VirtualHost>

The first two lines, ‘NameVirtualHost *:443’ and ‘<VirtualHost *:443>’, tell the server to listen for all names on port 443, the SSL port that we enabled previously. The ‘ServerName’ directive specifies the domain that this configuration applies to, in this case svn.alexkuo.info. The Location tags can be used to map different paths to different repositories. In this example, there is only one: ‘https://svn.alexkuo.info/’.

Inside the Location tag, ‘SVNPath’ specifies the local path to the repository, and ‘AuthType’ specifies the authentication type. The ‘Basic’ setting will simply prompt for a username and password. ‘AuthUserFile’ points to the password file that will be used to authenticate users when they attempt to access the repository. The settings in the <Limit> block specify which request methods require a valid user.

Next, let’s set up a user for our repository. Type:

sudo htpasswd -c /etc/apache2/dav_svn.passwd svnuser
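
Note that the -c flag creates the password file, so it should only be used for the first user; running it again with -c would overwrite the file. Additional users can be added by omitting the flag (the username here is just a placeholder):

sudo htpasswd /etc/apache2/dav_svn.passwd anotheruser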

Now we should create a repository at the location specified by ‘SVNPath’ in our virtual host file. We also need to set the owner and group to ‘www-data’ and give that group read/write permissions on these directories. To do this, type:

sudo svnadmin create /var/svn/akuo
sudo chown -R www-data:www-data /var/svn/akuo
sudo chmod -R g+ws /var/svn/akuo
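
As an optional sanity check, you can ask Subversion for the youngest revision in the new repository; a freshly created repository should report revision 0:

sudo svnlook youngest /var/svn/akuo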

The last thing we need to do is enable the site and restart Apache. Type:

sudo a2ensite svn.alexkuo.info
sudo /etc/init.d/apache2 restart

You should now be able to access your Subversion repository over HTTPS.
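
For example, a first checkout from another machine might look like the following (the target directory name is arbitrary, and you will be prompted for the password of the user created earlier):

svn checkout https://svn.alexkuo.info/ akuo --username svnuser

Because the certificate is self-signed, the svn client will ask you to accept it the first time you connect.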

Ubuntu Desktop 7.10: Setting Up an HP 1200 Printer

After a short search through the Ubuntu forums, I ran into this post that went into detail about setting up an HP printing device. After briefly reading through the instructions, I ran a utility called ‘HPLIP’, a program that automatically downloads and compiles all the files necessary to activate your printer. The program will ask you a few questions about your computer and ask you to replug your printer at the end of the installation. After doing this, I printed a test page and golly… it actually worked.
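
If you would rather not run the downloaded installer, HPLIP is also packaged in the Ubuntu repositories; something along these lines should work, though I went the installer route myself:

sudo apt-get install hplip
hp-setup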

Ubuntu Linux: Syncing Documents between Different Computers Using NFS and Unison

The other day I successfully made a full transition from my laptop to my desktop as my primary development environment. The biggest hurdle before completing this transition was transferring and syncing documents between the laptop and the desktop. For quick file transfers, I created a Network File System (NFS) share on my desktop and mounted it on my laptop. For a quick overview of how to set up and mount NFS, consult this thread on the Ubuntu forums.
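
As a rough illustration (the hostname and paths here are made up, and the export on the desktop side still has to be configured as described in that thread), mounting the share from the laptop looks something like this:

sudo apt-get install nfs-common
sudo mkdir -p /mnt/share
sudo mount desktop:/home/alex/share /mnt/share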

I also wanted to sync and compare documents against a centralized server and have the ability to see the differences between a client and a centralized ‘master’ copy (think Subversion, but without all the permissions and change logging). After a quick Google search, I found a wonderful program called ‘Unison’. This program lets a user define a master directory on a server and slave directories on clients: the master is the directory that all clients compare their files against, and the slaves are the clients that send new files to the master directory or receive files that other clients have copied to it. For directions on installing Unison, consult this article on howtoforge.com.
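
Once Unison is installed on both machines and SSH access is set up, a sync can be kicked off from the client with a command along these lines (the hostname and paths are placeholders; Unison will walk you through any conflicts it finds):

unison ~/Documents ssh://desktop//home/alex/Documents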

ASP.NET and Retrieving Different Sections of the Current URL using Request.Url

While working on a project in ASP.NET, I needed a function that would retrieve the domain of the current URL; however, I also wanted the function to retrieve the correct ASP.NET development web server path when developing in Visual Studio. After consulting Google, I ran into this old post on Rick Strahl’s blog about the Request.Url object.

After some experimenting, I created a web page in ASP.NET 2.0 that shows which parts of the URL are returned by the different calls. The following goes in the page load event.

protected void Page_Load(object sender, EventArgs e)
{
    // Each line writes out a different view of the current request's URL.
    Response.Write("Request.Url.AbsolutePath= " + Request.Url.AbsolutePath);
    Response.Write("<br>");
    Response.Write("Request.Url.AbsoluteUri= " + Request.Url.AbsoluteUri);
    Response.Write("<br>");
    Response.Write("Request.Url.GetLeftPart(UriPartial.Authority)= " + Request.Url.GetLeftPart(UriPartial.Authority));
    Response.Write("<br>");
    Response.Write("Request.Url.GetLeftPart(UriPartial.Path)= " + Request.Url.GetLeftPart(UriPartial.Path));
    Response.Write("<br>");
    Response.Write("Request.Url.GetLeftPart(UriPartial.Scheme)= " + Request.Url.GetLeftPart(UriPartial.Scheme));
    Response.Write("<br>");
    Response.Write("Request.RawUrl= " + Request.RawUrl);
    Response.Write("<br>");
    Response.Write("Request.Path= " + Request.Path);
    Response.Write("<br>");
    Response.Write("Request.ApplicationPath= " + Request.ApplicationPath);
    Response.Write("<br>");
    Response.Write("Page.ResolveUrl= " + ResolveUrl("~/dealer/default.aspx"));
    Response.Write("<br>");
    Response.Write("GetAuthorityApplicationPath= " + GetAuthorityApplicationPath());
}

// Combines the scheme, host, and port with the application's virtual path.
private String GetAuthorityApplicationPath()
{
    return String.Concat(Request.Url.GetLeftPart(UriPartial.Authority), Request.ApplicationPath);
}

The function GetAuthorityApplicationPath() is what I needed in the end to dynamically retrieve either the domain in a production environment or the development web server URL while running Visual Studio (e.g. ‘http://localhost:1234/WebDirectory’).

Developing on a Open Source Platform

Lately I’ve been tasked with writing a very client-heavy administration interface for a web application. It uses the Extjs framework and its widgets for building the GUI, Django and Python for the application tier, and PostgreSQL for the database. We’re using Apache and Ubuntu Server as our platform. The entire application stack is open source, so the acquisition cost for starting development is nil.

In the past few weeks, I’ve developed more insight into the advantages of developing on a completely open source stack. The newest pro I’ve discovered is the documentation and active communities around the larger, more popular projects. I know a lot of MS developers moan about the lack of adequate support available for some open source projects, but that isn’t true of all of them. When choosing components for your system, it’s almost a given that strong community support is a requirement. Fortunately, in OSS projects, the utility of a project and the general following behind it go hand in hand.

I can recommend with confidence that the community support behind our application stack (Extjs, Django, Python, PostgreSQL) is strong and adequate for any web application project that you may want to pursue.

Ubuntu Desktop: Unplugging/Replugging your Network Cable on your Laptop and Requesting a New Address from DHCP.

Sometimes when working on my laptop, which has Ubuntu Desktop installed on it, I have to move it around and therefore unplug the network cable and switch to wireless or vice versa. Unplugging and replugging your laptop into a network sometimes results in the laptop’s inability to renew its IP address or re-establish a connection with the internet. After a quick Google search, I found this post about the problem.

In order to issue a command similar to ‘ipconfig /renew’ in Windows, open your shell and type the following:

sudo ifdown eth0
sudo ifup eth0

These two commands will renew your IP address and should fix the connection problem. However, there’s a program called ‘ifplugd’ that monitors your network connection and automatically renews your address if this problem occurs. To install this program, open your shell and type:

sudo apt-get install ifplugd
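
After the install, you can confirm that the daemon is actually running with something like the following (the bracketed pattern just keeps grep from matching itself):

ps ax | grep '[i]fplugd'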

Script tags and Javascript Arrays in IE and Firefox

Tonight I was wrapping up a deployment for a feature that was heavily dependent on javascript. After running my first batch of tests on Firefox, everything passed without a hitch. Then I ran my tests on Internet Explorer 7 and… nothing. Literally a blank page rendered on my screen. Thus began my latest saga of fixing compatibility issues between browsers.

The first problem was that my scripts were either being downloaded and not executed, or not being downloaded at all. I checked whether the javascript files were being retrieved by using a program called Fiddler2, which intercepts all HTTP requests and responses from Internet Explorer. Running the program confirmed that my files were being downloaded, which meant they were not being executed. My HTML code looked like the following:

...
<body>
<script type='text/javascript' src='/file.js' />
</body>
....

I immediately remembered that Internet Explorer doesn’t execute javascript references unless you specify the complete closing tag. Why? I suspect it’s because I didn’t specify the doctype in the html tags. To fix the problem, I changed the script tag to look like this:

...
<body>
<script type='text/javascript' src='/file.js'></script>
</body>

However, that was not the end of my javascript problems with Internet Explorer. After running the javascript, I ran into an error regarding an undefined element in an array. My first thought was that this couldn’t be; the script ran flawlessly in Firefox. After some debugging, I concluded that the javascript engine in IE interprets literal representations of arrays differently than Firefox. The evidence that led to this conclusion came from comparing the array length returned in Firefox versus Internet Explorer: Firefox’s length was 3, while Internet Explorer’s was 4.

My Firefox-only-array looked something like this:

array = [ {id:1,name:'hi'}, {id:2, name:'hello'}, {id:3, name: 'greetings'}, ]

It turns out that the trailing comma in the array throws off Internet Explorer and causes it to count an extra element. The fix is to remove that last comma. The corrected code looks like this:

array = [ {id:1,name:'hi'}, {id:2, name:'hello'}, {id:3, name: 'greetings'} ]

Developing Client Heavy Applications

After sitting down and creating a javascript client, I have to say: this really sucks. I guess moving to a heavier client tier is a natural progression of web development. Back in 1996 or so, when I was still in grade school, I started messing around with HTML and basic CGI scripts for creating dynamic web pages. About two years later, MySQL became really popular and the first database-generated web sites started to appear. The big argument back then was whether application logic should live in the database or in the application tier. The languages and platforms I was aware of in the open source world were Perl and PHP3, both equally terrible web development tools. These weren’t the only open source tools out there at the time; quite the opposite, there was a plethora of different web platforms. Some still exist but remain obscure; others are just obscure. Let’s just say things were weird, because everyone was still trying to figure out the best way to develop web front ends. For example, it wasn’t unheard of for SQL to be used to generate HTML.

After PHP4 came out, there was an explosion in web development around the PHP, MySQL, and Apache application stack. Around this time, most of the dynamic page generation moved to the application tier. Fast forward about six years to the present, and we’re seeing more of the page generation executed in the web browser using javascript or some other platform built into the browser. There are numerous advantages to moving page generation to the client: lower load on the servers, improved visual enhancements, and faster execution speed. However, developing interfaces in javascript still feels immature. To me, client side development in javascript is about equivalent to what PHP4 was back in 2001: it does the job but has a lot of room for improvement.

This doesn’t mean the javascript engines found in IE and Firefox aren’t mature; it’s actually quite the opposite. Since javascript has been incorporated in browsers since the 1990s, the engines are very mature and stable. It’s just that the requirements and forms of usage for javascript have changed since its original inception, which is why I think the current implementation seems incomplete. By incomplete, I mean issues such as browser compatibility, the occasional memory leak, and development tools that still have room for improvement in areas like code completion and debugging.

The current application I’m working on uses the ExtJs framework for generating the Admin UI and Django as the middle tier for parsing requests between the client and the database. So far, the combination is working well.

Ubuntu Desktop: How to Find Your Application Files that Store Your Personal Preferences

Today I was trying to find the location of my chat logs for gaim in Ubuntu and noticed that none of my logs were being found through the File Search program (‘Places’ -> ‘Search for Files’). After digging through a few Google queries, I ran into a blurb about hidden folders prefixed with a period. It turns out that the user-specific files storing personal preferences for your applications live in directories following the notation ‘/home/user_name/.program_name’. So all my personal files and settings related to gaim are stored in ‘/home/alex/.gaim’.

The File Search program ignores hidden files by default. In order to search hidden files, click on the ‘Available Options’ drop-down list, select ‘Show hidden and backup files’, and press the ‘Add’ button. This should include all hidden files in your search.

To see a list of the hidden directories in your home folder, open the shell and type the following:

ls -a

By default, your command prompt should open in your home folder, so you shouldn’t need to navigate to ‘/home/user_name/’.
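
If you only want to see the hidden entries rather than everything, you can filter the listing; this shows only the entries whose names start with a period (including ‘.’ and ‘..’):

ls -a ~ | grep '^\.'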

Mirroring an FTP site in Ubuntu Server

The other day I was tasked with mirroring an FTP site, about 5+ gigs of files, on our local server. Mirroring directories is a fairly common task when administering servers; the main variables for a job like this are the protocols available, whether the sync is bi-directional or one way, and how fast the mirroring needs to occur.

Lucky for me, this job did not demand an instantaneous sync, and it was only one way, meaning changes flowed in a single direction from the remote server to ours. The biggest problem was that the job was limited to using only the FTP protocol for mirroring the site. This immediately removed rsync, a popular server/client tool for syncing directories remotely, as an option. After a quick search through the Ubuntu forums, I stumbled upon a post that detailed several programs for mirroring an FTP site using the FTP protocol only. I chose a program called ‘ftpmirror’.

Ftpmirror is a program that lets a user define ‘packages’, which are configuration details for mirroring an FTP site, and schedule these ‘packages’ to run daily, weekly, or monthly. To install this program, I typed:

sudo apt-get install ftpmirror

If you’re using Ubuntu Server, the configuration files should reside in the ‘/etc/ftpmirror/’ directory. Upon browsing through the directory, you will find a file called ‘ftpmirror.cf-sample’. This file contains a few example ‘packages’ that can be used as a template. The user puts any active ‘packages’ in the ‘ftpmirror.cf’ file. My ‘ftpmirror.cf’ file looks like this:

package = alexkuo_media
ftp-server = master.alexkuo.info
ftp-user = mirror
ftp-pass = password
remote-directory = /media/pics/
local-directory = /home/deploy/media/pics

This package uses the directory ‘/media/pics/’ as the root directory on the ftp server ‘master.alexkuo.info’ and uses the login/password pair mirror/password to log in to the remote server. Once the package is run, all files, directories, and subdirectories found in ‘/media/pics/’ are downloaded into the ‘/home/deploy/media/pics’ directory on the local machine.

I decided to run this job once a week, so I added the package to the ‘/etc/ftpmirror/list.weekly’ file. To do this, open the ‘list.weekly’ file with a text editor. Mine looks like this:

alexkuo_media

Pretty plain, huh? I removed the comments that originally came with the file, so it looks pretty bare. Adding another package involves defining it in the ftpmirror.cf file and appending the package name on a new line in one of the list.* files.
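
If you would rather not wait for the weekly cron run to find out whether the configuration works, ftpmirror can, as far as I can tell, be invoked by hand with a package name; check ‘man ftpmirror’ for the exact options on your version:

sudo ftpmirror alexkuo_media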