Remote Communications Inc.

mod_gzip for the Apache Web Server


The official mod_gzip FAQ

Q: What is mod_gzip?
Q: What platforms does mod_gzip run on?
Q: Is mod_gzip a 'Proxy Server'?
Q: Do I need ANY 'extra' Client-side software to use mod_gzip?
Q: How does mod_gzip actually reduce the HTTP content?
Q: What is IETF Content-Encoding?
Q: How can I tell if my browser is able to receive IETF Content Encodings?
Q: Has mod_gzip been tested?
Q: Can I test mod_gzip with standard benchmarking software?
Q: Where can I get an HTTP 1.1 compliant benchmarking tool for Apache?
Q: Am I losing any actual content when using mod_gzip?
Q: Does mod_gzip have any HTML based information screens?
Q: Can the accelerated content be cached?
Q: How do I report a problem with mod_gzip?
Q: What about Remote Communications, the Company?
Q: How do I find out more?
Q: What about related links?

Some common installation questions

Q: How do I add mod_gzip to my existing Apache Web Server?
Q: How do I add compression statistics to my Apache log files?
Q: How do I get mod_gzip to only compress files from certain directories?
Q: How do I compile a new version of mod_gzip.c for my platform?



Q: What is mod_gzip?     [Return to the index]

mod_gzip is a standard Apache Web Server module which acts as an Internet Content Accelerator. Its function in life is to become an integral 'part' of any existing Apache Web Server and see that the content being delivered to YOU, the end-user, is as small and as optimized as possible.

The Apache Web Server is by far the most popular and widely used Web Server program in the world today with more than 60 percent of the Server market and at least 10.6 million installations worldwide.



Q: What platforms does mod_gzip run on?     [Return to the index]

Just about anything.

Since mod_gzip is simply a standard Apache Web Server module, it runs on any platform supported by the Apache Web Server itself.

Apache with mod_gzip runs on all popular Server platforms ( including Windows 9x/NT/2000, Linux, FreeBSD, UNIX, etc. ).



Q: Is mod_gzip a 'Proxy Server'?     [Return to the index]

No.

mod_gzip is a standard Apache Web Server module and becomes 'part' of the Apache Web Server itself.



Q: Do I need ANY 'extra' Client-side software to use mod_gzip?     [Return to the index]

No.

mod_gzip does NOT require ANY 'extra' software to be installed on the Client side. There is no 'Plug-in' or 'Client Proxy' of any kind. All you need is your current HTTP 1.1 compliant browser. All modern browsers released since early 1999 are already capable of receiving compressed Internet content via standard IETF Content Encoding if they are HTTP 1.1 compliant.

There are a number of commercial products available that call themselves Internet or Network accelerators which are actually using nothing more than the same publicly available techniques to reduce the content. Most of these still require unnecessary client side Plug-ins or Proxy Servers.

mod_gzip is comparable to any commercial product available and in most cases out-performs the commercial products that are simply using public domain GZIP and 'deflate' compression methods and the published IETF Content Encoding standards.



Q: How does mod_gzip actually reduce the HTTP content?     [Return to the index]

mod_gzip for the Apache Web Server uses the well established and publicly available IETF ( Internet Engineering Task Force ) Content-Encoding standards in conjunction with publicly available GZIP compression libraries such as ZLIB ( Copyright © 1995-1998 Jean-loup Gailly and Mark Adler ) to deliver dynamically compressed content 'on the fly' to any browser or user-agent that is capable of receiving it.
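
As a rough illustration of the kind of size reduction GZIP achieves on typical repetitive HTML markup, here is a small sketch using Python's standard gzip module ( not mod_gzip's own code, and the sample markup is invented for the example ):

```python
import gzip

# A small, repetitive HTML fragment, as real-world markup tends to be.
html = ("<html><body><table>" +
        "<tr><td>row data</td></tr>" * 200 +
        "</table></body></html>").encode("ascii")

compressed = gzip.compress(html)

saved_pct = 100 * (len(html) - len(compressed)) / len(html)
print(len(html), len(compressed), round(saved_pct))
```

The more repetitive the markup, the better the ratio; highly repetitive tables and lists compress especially well.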

mod_gzip also automatically takes care of any situations where requests are being made by a browser or other HTTP user-agent that is not HTTP 1.1 compliant and is incapable of receiving IETF Content-Encoding ( or any other kind of encoding or compression ). In those cases, mod_gzip will either use other methods to optimize the content as best it can for the non-HTTP 1.1 compliant requestor or simply return the response(s) 'untouched'.

More advanced versions of mod_gzip contain compression and content reduction methods that are much more sophisticated than simple IETF Content-Encoding and provide levels of performance that are impossible to achieve using simple public domain GZIP or IETF Content-Encoding techniques.

Whereas standard GZIP compression is typically only able to provide a certain low average level of compression, RCI has other methods and algorithms ( some patented and others patent-pending ) for compressing Internet content that can consistently provide better than 94 percent compression on any HTML, XML, WML or text based data stream(s).



Q: What is IETF Content-Encoding?     [Return to the index]

In a nutshell... it is simply a publicly defined way to compress HTTP content being transferred from Web Servers down to Browsers using nothing more than public domain compression algorithms that are freely available.

"Content-Encoding" and "Transfer-Encoding" are both clearly defined in the public IETF Internet RFCs that govern the development and improvement of the HTTP protocol which is the 'language' of the World Wide Web itself. See [   Related Links   ].

"Content-Encoding" was meant to apply to methods of encoding and/or compression that have been already applied to documents BEFORE they are requested. This is also known as 'pre-compressing pages'. The concept never really caught on because of the complex file maintenance burden it represents and there are few Internet sites that use pre-compressed pages of any description.

"Transfer-Encoding" was meant to apply to methods of encoding and/or compression used DURING the actual transmission of the data itself.

In modern practice, however, and for all intents and purposes, the 2 are now one and the same.

Since most HTTP content from major online sites is now dynamically generated, the line has blurred between what is happening BEFORE a document is requested and WHILE it is being transmitted. Essentially, a dynamically generated HTML page doesn't even exist until someone asks for it, so the original concept of all pages being 'static' and already present on the disk has quickly become an 'older' concept, and the originally defined black-and-white line of separation between "Content-Encoding" and "Transfer-Encoding" has simply turned into a rather pale shade of gray.

Unfortunately, the ability for any modern Web or Proxy Server to supply 'Transfer-Encoding' in the form of compression is even less available than the spotty support for 'Content-Encoding'.

Suffice it to say that regardless of the 2 different publicly defined 'Encoding' specifications, if the goal is to compress the requested content ( static or dynamically generated ) it really doesn't matter which of the 2 publicly defined 'Encoding' methods is used... the result is still the same. The user receives far fewer bytes than normal and everything is happening much faster on the client side.

The publicly defined exchange goes like this...

1. A Browser that is capable of receiving compressed content indicates this in all of its requests for documents by supplying the following request header field when it asks for something...

Accept-Encoding: gzip, compress

2. When the Web Server sees that request field then it knows that the browser is able to receive compressed data in one of only 2 formats... either standard GZIP or the UNIX 'compress' format. It is up to the Server whether it will compress the response data using either one of those methods ( if it is even capable of doing so ).

3. If a static compressed version of the requested document is found sitting on the Web Server's hard drive which matches one of the formats the browser says it can handle then the Server can simply choose to send the pre-compressed version of the document instead of the MUCH larger uncompressed original.

4. If no static document is found on the disk which matches any of the compressed formats the browser is saying it can 'Accept' then the Server can now either choose to just send the original uncompressed version of the document OR make some attempt to compress it in 'real-time' and send the newly compressed and MUCH smaller version back to the browser.
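
The negotiation in steps 1 through 4 can be sketched as a simple server-side decision. This is a minimal sketch with hypothetical helper names; a real server would also weigh the content type, the response size, and any q-values on the Accept-Encoding field:

```python
import gzip

def choose_encoding(accept_encoding_header):
    """Pick a response encoding based on the browser's Accept-Encoding field."""
    offered = [token.strip().lower()
               for token in accept_encoding_header.split(",") if token.strip()]
    if "gzip" in offered:
        return "gzip"
    return None  # requestor cannot handle gzip; send the response uncompressed

def encode_body(body, encoding):
    """Apply the chosen encoding to the response body, if any."""
    if encoding == "gzip":
        return gzip.compress(body)
    return body

print(choose_encoding("gzip, compress"))  # -> gzip
print(choose_encoding(""))                # -> None
```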

Most popular Web Servers are still unable to do this final step.

The Apache Web Server has 66 percent of the Web Server market and is still incapable of providing any real-time compression of requested documents even though all modern browsers have been requesting compressed content, and been capable of receiving it, for more than 2 years.

Microsoft's Internet Information Server is equally deficient. If it finds a pre-compressed version of a requested document it might send it but has no real-time compression capability.

IBM's WebSphere Server has some limited support for real-time compression but it has 'appeared' and 'disappeared' in various release versions of WebSphere.

The VERY popular SQUID Proxy-Caching Server from NLANR also has no dynamic compression capabilities even though it is the de-facto standard Proxy-Caching software used just about everywhere on the Internet.

The original designers of the HTTP protocol really did not foresee the current reality whereby so many people would be using the protocol that every single byte would count. The heavy use of pre-compressed graphics formats such as .GIF on the Internet and the relative inability to reduce the graphics content any further than the native format itself makes it even MORE important that all other exchange formats be optimized as much as possible.

The same designers also did not foresee the current reality where MOST HTTP content from major online vendors is generated DYNAMICALLY and so there really is no chance for there to ever be a 'static' compressed version of the requested document(s).

Public IETF Content-Encoding is still not a 'complete' specification for the reduction of Internet content but it DOES WORK and the performance benefits achieved by using it are both obvious and dramatic.



Q: How can I tell if my browser is able to receive IETF Content Encodings?     [Return to the index]

If your user-agent (browser) is adding "Accept-Encoding: gzip" to any GET request that it sends, it is indicating to a Web Server that it is capable of receiving IETF Content Encodings. Whatever encoding schemes a user-agent (browser) is able to receive will be listed after the colon on the "Accept-Encoding:" request header line.
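
The field is easy to pick out of a raw HTTP request by hand. Here is a small sketch ( the helper name and the sample request are invented for illustration ) that extracts the listed encodings:

```python
def accepted_encodings(raw_request):
    """Return the encodings listed on the Accept-Encoding request header line."""
    for line in raw_request.split("\r\n"):
        name, _, value = line.partition(":")
        if name.strip().lower() == "accept-encoding":
            return [enc.strip() for enc in value.split(",") if enc.strip()]
    return []  # no Accept-Encoding field at all

request = ("GET / HTTP/1.1\r\n"
           "Host: www.example.com\r\n"
           "Accept-Encoding: gzip, deflate\r\n"
           "\r\n")
print(accepted_encodings(request))  # -> ['gzip', 'deflate']
```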

If you don't know how to 'see' what your user-agent (browser) is sending there is an easy way to tell.

RCI maintains an online Connection speed test link that will tell you exactly what your browser is sending, what it is capable of receiving, and will give you a report on the performance increase you can expect to see over your current connection when you are receiving compressed Web content.

Just go to the following URL to perform the test on whatever connection you choose and whatever user-agent (browser) you want a report on...

http://12.17.228.52:7000/

NOTE: Port 7000 is a valid 'safe' port at 12.17.228.52 but if you are behind a firewall that won't even allow your browser to request anything from any port other than HTTP port 80 then you probably will not be able to run this test. Contact your LAN administrator about allowing access to external ports other than HTTP port 80.

The connection test will begin immediately and will cycle through 4 screens that simply say...

Your connection is being evaluated... X

'X' will be a number that will change from 1 through 4 and then the 'final report' should appear.

That 'final report' will look like this...

Begin: Example connection test report
   

Speed Test Thermometer

Your current IP address is
216.60.210.59
Your connection will support an
actual byte transfer rate of
1924.4  /  8659.9
bytes per second.

15395.3  /  69278.9 bps

Your browser is capable
of receiving CHTML®
Content Compression.

Your browser is capable
of receiving GZIP
Content Encoding.

To repeat the test do NOT press RELOAD. Just press the button below...

[ Bar-graph 'thermometer' comparing uncompressed (RED) and compressed (BLUE) byte transfer rates, with reference marks at T-1 ( 187500 bytes per second ), Full ISDN ( 12000 ), 56.6k ( 7200 ), 28.8k ( 3600 ) and 14.4k ( 1800 ). ]


Remote Communications, Inc. @ http://www.RemoteCommunications.com

Header information from your browser...
GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt; TUCOWS)
Host: 12.17.228.52:7000
Connection: Keep-Alive
Cookie: CFTOKEN=56867408; CFID=45639

Test buffer 1 = Yahoo's home page
Test buffer 1 length = 12434 bytes
Total bytes compressed = 2480 bytes
Bytes per second compressed = 8659 bytes
End: Example connection test report

Connection test results explained...

If your browser sent an "Accept-Encoding: gzip" field in its GET request then the shaded area to the right of the compression results graph will say...

"Your browser is capable of receiving GZIP Content Encoding."

...and the "Accept-Encoding: gzip" request field should be clearly visible in the 'echo' of your browser's GET request that appears underneath the compression results graph.

If your browser did NOT send an "Accept-Encoding: gzip" field in its GET request then the shaded area to the right of the compression results graph will say...

"Your browser is NOT capable of receiving GZIP Content Encoding."

...and the "Accept-Encoding: gzip" request field will not be present anywhere in the 'echo' of your browser's GET request that appears underneath the compression results graph.

As for the rest of the report... it simply shows how fast your line can transfer compressed HTML (BLUE LINE) over your existing connection versus how slow the line is (RED LINE) when it is NOT receiving compressed HTML.

The report will also indicate if your browser is capable of receiving the special CHTML encoding format. See RCI's home page at http://www.RemoteCommunications.com/ for more details about the special CHTML encoding format.



Q: Has mod_gzip been tested?     [Return to the index]

Yes. Extensively.

mod_gzip has been running on RCI's Servers for quite some time and has successfully fulfilled requests for accelerated content millions of times without error. You can trust it. We do.

We have done some extensive benchmark testing of Apache Web Servers running mod_gzip and in all cases the benchmarking has shown the same results. The overall performance benefits when transmitting compressed HTTP Content are both obvious and dramatic.

In almost all low-speed test cases involving 28.8k dial-up connections mod_gzip was able to turn the 28.8k connection into a virtual ISDN line with a content delivery rate 4 or 5 times higher than normal.
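
The arithmetic behind that speedup is straightforward: if the payload shrinks by a given percentage, the effective byte rate over the same physical line rises by the inverse factor. A sketch with illustrative numbers ( the 78 percent figure is an assumed average HTML compression ratio, not a measured one ):

```python
line_rate = 3600          # bytes/sec a 28.8k modem typically delivers
compression_pct = 78      # assumed average compression on HTML content

# Effective rate: the same wire time now carries 1/(1 - ratio) as many
# original bytes' worth of content.
effective_rate = line_rate / (1 - compression_pct / 100)
speedup = effective_rate / line_rate
print(round(effective_rate), round(speedup, 1))  # -> 16364 4.5
```

At roughly 16000 effective bytes per second, the 28.8k line is delivering content faster than an uncompressed Full ISDN connection.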

Visit RCI's Website to read more about mod_gzip performance and benchmarking test results.



Q: Can I test mod_gzip with standard benchmarking software?     [Return to the index]

Yes... but please note the following...

MOST standard benchmarking tools are not fully HTTP 1.1 compliant and almost none of them are capable of handling IETF Content encoding.

If you use a standard HTTP benchmarking program that does not include the 'Accept-Encoding: gzip, deflate' request field in the request header then mod_gzip will not ( as per RFC standards ) actually send any compressed data.

mod_gzip will only send compressed data to User-Agents that indicate they are capable of receiving it via the 'Accept-Encoding:' field.

Some benchmarking programs do not supply the 'Accept-Encoding:' request field by default but do allow you to add it yourself via a command line parameter or special configuration file.

Check the documentation for the benchmarking program itself.

Everything will still work without the 'Accept-Encoding:' field in the request but the benchmarking won't tell you much since it won't actually be receiving anything compressed.

If you need a benchmarking or testing tool to measure the compression performance on your system and you don't have one that is capable of doing so... just contact RCI. We have our own custom versions of just about all major load generating and HTTP benchmarking tools that are capable of requesting and receiving standard IETF Content encoding(s).



Q: Where can I get an HTTP 1.1 compliant benchmarking tool for Apache?     [Return to the index]

RCI has added full HTTP 1.1 compliance and Content-decoding capability to the industry standard ApacheBench benchmarking tool that comes with the Apache Web Server. This enhanced version of ApacheBench has ( along with mod_gzip ) been donated to the Apache Software Foundation but can also be downloaded directly using the following link...

RCI's Enhanced version of ApacheBench benchmarking tool.

If you need a benchmarking or testing tool other than ApacheBench to measure the compression performance on your system and you don't have one that is capable of doing so... just contact RCI. We have our own custom versions of just about all major load generating and HTTP benchmarking tools that are capable of requesting and receiving standard IETF Content encoding(s).



Q: Am I losing any actual content when using mod_gzip?     [Return to the index]

No.

mod_gzip is a 'lossless' content acceleration module. There is no loss of real information during the optimization process that takes place. The only thing that happens is that the requested content ends up arriving much faster.
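
'Lossless' here means the decompressed bytes are identical to the original, which is easy to verify for the standard gzip algorithm itself ( a sketch using Python's gzip module, with an invented sample document ):

```python
import gzip

original = b"<html><body><p>Every byte survives the round trip.</p></body></html>"

# Compress and then decompress: the result must be byte-for-byte identical.
restored = gzip.decompress(gzip.compress(original))
print(restored == original)  # -> True
```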



Q: Does mod_gzip have any HTML based information screens?     [Return to the index]

Yes.

All versions of mod_gzip have internal HTTP-based information screens that can be accessed by an administrator from anywhere on the Internet using nothing more than a standard browser. The screens can provide immediate and up-to-date information about the 'health' of the module and performance summaries.

mod_gzip version 1.3.14 commands

Use any browser to send the following commands to any Apache Web Server that has mod_gzip version 1.3.14 installed...

http://www.yourserver.com/mod_gzip_command_version

Displays only the mod_gzip version number and simply reports the fact that mod_gzip is available on the Server...

The command will display the following in your browser...
mod_gzip is available on this Server
mod_gzip version = 1.3.14

http://www.yourserver.com/mod_gzip_command_showstats

Shows some basic information about the last few requests that have been processed by mod_gzip and the rate of compression achieved.

The command will display something like this in your browser...
mod_gzip_command_showstats seen...
mod_gzip version = 1.3.14
mod_gzip_total_commands_received = 1
mod_gzip_total_requests_received = 2
mod_gzip_total_requests_declined = 0
mod_gzip_total_requests_processed = 2
mod_gzip_total_bytes_processed_raw = 680633
mod_gzip_total_bytes_processed_compressed = 49771
mod_gzip_total_bytes_saved_using_compression = 630862
compression_ratio = 93 (percent)

http://www.yourserver.com/mod_gzip_command_resetstats

Resets the internal 'statistics' to all ZEROES again without having to restart the Apache Web Server.

If mod_gzip is not installed on the Server that the commands are being sent to then you should simply receive a 404 Not Found error message in your browser.
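
The compression_ratio figure in the showstats output is simply the bytes saved expressed as a percentage of the raw bytes processed. Reproducing it from the example numbers in the showstats output:

```python
raw = 680633          # mod_gzip_total_bytes_processed_raw
compressed = 49771    # mod_gzip_total_bytes_processed_compressed

saved = raw - compressed
ratio = round(100 * saved / raw)
print(saved, ratio)   # -> 630862 93
```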

The mod_gzip command interface is still evolving. Always check the documentation that comes with mod_gzip for full details and/or updates about the information screens available in that particular version of the program. The source code itself is also the best place to look. All the available commands are always fully documented in the source code.



Q: Can the accelerated content be cached?     [Return to the index]

It depends.

It makes no sense to attempt to cache dynamically generated content that will be different every time the object is requested and, these days, that accounts for a very large percentage of the traffic coming from most major online sites. In many cases, major online sites now ALWAYS supply slightly 'different' versions of their page(s) to each requestor, since they have 'round robin' agreements in place with their advertisers which insert different advertisements onto every copy of a page each time it is requested.

Search engines ( like Yahoo! and Alta-Vista ) are perfectly good examples of this. The odds that any 2 search result pages generated dynamically as the result of a new query will be identical, and that any benefit will be seen by trying to cache the previous search results, are slim to none.

Likewise for just about all e-commerce transactions. The account information and form data being exchanged has very little chance of being 'the same' for any 2 people using the interface so caching these kinds of e-commerce sites is usually pointless. Online catalogs or product listings can be the exception but even they change so often ( sometimes moment to moment ) that very little benefit is seen from attempting to cache these pages.

It all comes down to TIME. If the overhead required to store copies of pages in a local cache and to perform the complicated logic to determine where the 'freshest' copy of a page resides takes more time to execute than it would to simply fast-compress the object and send it on its way then obviously the caching itself is simply a waste of time.

Our own testing has shown that when you take into account the high overhead of implementing any caching scheme and you combine it with the additional complexities of handling compressed versions of the normally uncompressed source objects, the real benefits of caching become hard to detect and/or justify.

The reality is this. Most HTTP objects that will provide the greatest speed benefit by being compressed before they are delivered to the requesting user-agent can be compressed in memory and forwarded to the requestor much faster than they can actually be verified as 'fresh' in a local cache and retrieved from disk. The average HTTP entity is around 30k and, when using any modern CPU and a robust compression algorithm, can be fast-compressed multiple times before it could be located and retrieved from a local disk cache even once. These verifiable test results generally negate the entire need to store compressed versions of objects. Factor in the additional reality that most content is being dynamically generated at all times and the need for the storage/retrieval of compressed versions of requested objects to/from a local cache becomes hard to justify.

RCI's own research in this area is ongoing. The next version of mod_gzip will, in fact, include the ability to store copies of compressed objects in a special local compressed object cache, but this initial version of mod_gzip has no built-in compressed object caching of its own.



Q: How do I report a problem with mod_gzip?     [Return to the index]

If you find a bug in the program or would simply like more information about mod_gzip please contact us. mod_gzip is still evolving so your input is appreciated and desired. Tell us what you need or want.

Problem reports will be addressed immediately.



Q: What about Remote Communications, the Company?     [Return to the index]

Remote Communications, Inc. is an industry leader in compression technology and Internet content delivery to both wired and wireless devices. Please visit our Home page to read more about other products available from RCI and please don't hesitate to contact us with any questions you may have about accelerating and improving the Internet experience for yourself or your users.

mod_gzip is just one of the many HyperSpace® enabled products available from Remote Communications, Inc.



Q: How do I find out more?     [Return to the index]

Please visit RCI's Home page to read more about mod_gzip and other (free) products available from RCI.

Please send all EMAIL correspondence regarding mod_gzip to: info@RemoteCommunications.com



Q: What about related links?     [Return to the index]

Here are some additional links that will take you to sites providing more detail about mod_gzip and the methods it uses to accelerate content...

Remote Communications, Inc. - Home page

Remote Communications, Inc. - Products page

Remote Communications, Inc. - HyperSpace® Product page

Remote Communications, Inc. - RCTPD® Product page

The Apache Software Foundation (ASF) Home page

ZLIB public domain compression libraries - Home page

GZIP public domain compression - Home page

RFC 1950 - Public 'ZLIB' specification, revision 3.3

RFC 1951 - Public 'deflate' specification, revision 1.3

RFC 2616 - The official HTTP 1.1 Protocol Specification

Internet Engineering Task Force (IETF) - Home page

World Wide Web Consortium Internet Standards - Home page

Official comp.compression online Compression FAQ




Q: How do I add mod_gzip to my existing Apache Web Server?     [Return to the index]

It's very simple.

mod_gzip is just a standard Apache 'plug in' module and is loaded the same way as any other Apache module. Apache uses different naming conventions for modules on Windows versus UNIX platforms and the examples below show the differences.

In both examples below (Server root) simply means the actual pathname of the location used as your Server root directory.

For the Windows version of Apache . . .

Just copy a pre-compiled binary version of ApacheModuleGzip.dll to your Apache (Server root)\modules directory and then edit your existing (Server root)\conf\httpd.conf Apache configuration file and add the following line to the # LoadModules section...

LoadModule gzip_module modules/ApacheModuleGzip.dll

Please note that even though Windows uses a backslash as a directory path separator, it is still OK to use the UNIX forward slash in the httpd.conf LoadModule entry. The Apache Server will know that the module you are referring to is actually (Server root)\modules\ApacheModuleGzip.dll.

For any UNIX based version of Apache . . .

Just copy a pre-compiled binary version of mod_gzip.so to your Apache (Server root)/modules directory and then edit your existing (Server root)/conf/httpd.conf Apache configuration file and add the following line to the # LoadModules section...

LoadModule gzip_module modules/mod_gzip.so

That's all there is to it.

Restart your Apache Web Server and it will now be capable of delivering accelerated content back to any HTTP 1.1 compliant browser. If Apache cannot locate the mod_gzip module when it starts it will say so. If all goes well you should not get any errors or warnings whatsoever. The new mod_gzip module will automatically 'kick in' as you begin requesting documents from this particular copy of Apache.

Compiling mod_gzip directly into Apache. . .

As with any standard Apache module, mod_gzip can, of course, be compiled directly into the Apache Web Server itself as a 'core module'.

To do that you may need a complete copy of the Apache Web Server source code ( available from http://www.apache.org ) and you will need to follow the instructions in the README and INSTALL files that come with the Apache source code for adding modules directly to the Apache compile process. See the next section if you only have a binary distribution of Apache.

You will also need a copy of the complete source code for mod_gzip itself which is available at the following location...

mod_gzip HOME page

You will NOT need anything other than the standard Apache source code and the code for mod_gzip itself. mod_gzip already contains all the compression code that you need inside of itself and uses no external compression algorithms or libraries.

I only have a binary distribution of Apache. Can I still re-compile mod_gzip.c?

Please see the following separate Q/A section on how to recompile mod_gzip.c without having to obtain all of the Apache source code...

Q: How do I compile a new version of mod_gzip.c for my platform?



Q: How do I add compression statistics to my Apache log files?     [Return to the index]

Simply configure your Apache Web Server to add the compression information to any of your standard Apache log files.

mod_gzip uses the standard Apache 'notes' facility to allow anyone to add transaction compression results to their own CLF ( Common Log Format ) Apache log file output.

The following is taken directly from a working httpd.conf Apache configuration file and it explains how to adjust your httpd.conf file to include compression information in standard Apache log files.

You may simply 'cut and paste' the relevant entries into your own Apache httpd.conf configuration file.

# The following directives define some format nicknames for use with
# a CustomLog directive (see below).

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

# mod_gzip log formats...

# mod_gzip makes a number of statistical items for each transaction
# available through the use of Apache's 'LogFormat' directives which
# can be specified in the httpd.conf Apache config file

# mod_gzip uses the standard NOTES interface to allow compression
# information to be added to the standard Apache log files.

# Standard NOTES may be added to Apache logs using the following syntax
# in any LogFormat directive...
# * %...{Foobar}n:  The contents of note "Foobar" from another module.

# Additional notes about logging compression information...

# The Apache LogFormat directive is unable to actually display
# the 'percent' symbol since it is used exclusively as a 'pickup'
# character in the formatting string and cannot be 'escaped' so
# all logging of compression ratios cannot use the PERCENT symbol.
# Use ASCII 'pct.' designation instead for all PERCENTAGE values.

# Example: This will display the compression ratio percentage along
# with the standard CLF ( Common Log Format ) information...

# Available 'mod_gzip' compression information 'notes'...
#
# %{mod_gzip_result}n - A short 'result' message. Could be OK or DECLINED, etc.
# %{mod_gzip_input_size}n - The size ( in bytes ) of the requested object.
# %{mod_gzip_output_size}n - The size ( in bytes ) of the compressed version.
# %{mod_gzip_compression_ratio}n - The compression ratio achieved.

LogFormat "%h %l %u %t \"%r\" %>s %b mod_gzip: %{mod_gzip_compression_ratio}npct." common_with_mod_gzip_info1
LogFormat "%h %l %u %t \"%r\" %>s %b mod_gzip: %{mod_gzip_result}n In:%{mod_gzip_input_size}n Out:%{mod_gzip_output_size}n:%{mod_gzip_compression_ratio}npct." common_with_mod_gzip_info2

# If you create your own custom 'LogFormat' lines don't forget that
# the entire LogFormat line must be encased in quote marks or you
# won't get the right results. The visible effect of there not being
# an end-quote on a LogFormat line is that the NAME you are choosing
# for the LogFormat line is the only thing that will appear in the
# log file that tries to use the unbalanced line.

# Also... when using the %{mod_gzip_xxxxx}n note references in your
# LogFormat line don't forget to add the lowercase letter 'n' after
# the closing bracket to indicate that this is a module 'note' value.

# Once a LogFormat directive that displays the desired level of
# compression information has been added to your httpd.conf file,
# simply use the 'name' associated with that LogFormat line in
# the 'CustomLog' directive for 'access.log'.

# Example: The line below simply changes the default access.log format
# for Apache to the special mod_gzip information record defined above...
# CustomLog logs/access.log common
CustomLog logs/access.log common_with_mod_gzip_info2
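
# If you would rather leave the standard access.log format untouched,
# Apache also allows more than one CustomLog directive, so the
# compression details can be written to a second, separate file.
# A minimal sketch ( the 'mod_gzip.log' filename is purely
# illustrative )...

CustomLog logs/access.log common
CustomLog logs/mod_gzip.log common_with_mod_gzip_info2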

# Using the 'common_with_mod_gzip_info1' LogFormat line for Apache's
# normal access.log file produces the following results in the access.log
# file when a gigantic 679,188 byte online CD music collection HTML
# document called 'music.htm' is requested and the Server, via mod_gzip,
# delivers the file compressed 93 percent down to only 48,951 bytes...

# 216.20.10.1 [12/Oct...] "GET /music.htm HTTP/1.1" 200 48951 mod_gzip: 93pct.

# The line below shows what will appear in the Apache access.log file
# if the more detailed 'common_with_mod_gzip_info2' LogFormat line is used.
# The line has been intentionally 'wrapped' for better display below
# but would normally appear as a single line entry in access.log.

# 216.20.10.1 [12/Oct...] "GET /music.htm HTTP/1.1" 200 48951
#                          mod_gzip: OK In:679188 Out:48951:93pct.

# The 'OK' result string shows that the compression was successful.
# The 'In:' value is the size (in bytes) of the requested file and
# the 'Out:' value is the size (in bytes) after compression followed
# by a colon and a number showing that the document was compressed
# 93 percent before being returned to the user.




Q: How do I get mod_gzip to only compress files from certain directories?     [Return to the index]

Just use the Apache AddHandler configuration directive.

You can put the following into any directory-specific Apache .htaccess file to have all .html files in that directory compressed when they are requested by an HTTP 1.1 compliant browser...

AddHandler gzip_module .html

You can also get creative with Apache's Location directive.

See the Apache documentation about using standard modules with the AddHandler and Location configuration directives.
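
As a sketch of the Location approach ( the '/docs' path and the '.txt' extension below are purely illustrative ), the handler can be limited to one part of the URL space like this...

<Location /docs>
AddHandler gzip_module .html .txt
</Location>

With this in place, only requests whose URL falls under /docs are handed to mod_gzip, and then only for the listed file extensions.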




Q: How do I compile a new version of mod_gzip.c for my platform?     [Return to the index]

It is actually very easy.

I only have a binary distribution of Apache...

If all you have is a binary distribution of Apache, you SHOULD still have a program in your binary directory called apxs, which will allow you to compile any external Apache module outside of the Apache source code tree and 'install' the resulting binary in the right place.

If you have apxs then just copy mod_gzip.c to the same directory where apxs resides and issue the following command...

apxs -i -a -c mod_gzip.c

That is all you should have to do.

The command SHOULD compile AND install mod_gzip properly.

If you receive any platform-specific compile errors then please let us know by sending us a mail message at info@RemoteCommunications.com and we will resolve the problem as soon as possible. Please include all the appropriate information regarding exactly what Operating System and Compiler you are using in addition to the exact error information itself.

I already have a binary and source code distribution of Apache...

You may still simply use the apxs utility program mentioned above to compile and install mod_gzip.c, but you may also compile it directly into the Apache Server itself rather than using it as an 'external module'.

To do that... just follow the instructions in the ../src/modules/example README file about creating a new directory for mod_gzip under the ../modules tree and including mod_gzip.c directly into the Apache core server.
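
As a rough sketch of that static-build route on an Apache 1.3 style source tree ( directory and version names below are illustrative; one commonly used variant drops the source file into the existing src/modules/extra directory rather than creating a new one )...

cd apache_1.3.x
cp /path/to/mod_gzip.c src/modules/extra/
./configure --activate-module=src/modules/extra/mod_gzip.c
make
make install

The --activate-module option tells the Apache 1.3 configure script to compile the named module source into the core server during the normal build.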



Credits...

RCTP® is a registered trademark of Remote Communications, Inc.

ZLIB Copyright © 1995-1998 Jean-loup Gailly and Mark Adler.

HyperSpace® and 'Powered by HyperSpace®' are registered trademarks of Remote Communications, Inc.


Other product names and logos are trademarked by their respective owners.


[   Return to the FAQ index   ]     [   Download mod_gzip   ]