Using the new rules configuration in UrlScan v3.0 Beta (Part 2)

Dissecting the SQL injection sample in the walkthrough

I will spend some time dissecting the SQL injection rule posted in the walkthrough for UrlScan. Before I do so, I want to reiterate that SQL injection is a web application issue, and hence the right place to fix it is in the web application. But when you are the victim of a SQL storm, it is less than ideal to have to go figure out all the places your web application might be susceptible. That's where UrlScan comes in and offers a stopgap solution until you can fix the apps, without taking any downtime hit on your site. The one issue here is that of false positives ... and these are hard to predict because different web applications have different requirements and semantics. Nonetheless, UrlScan can offer substantial protection in the face of a SQL storm at the cost of some false positives that will cause valid requests to be rejected.

[SQL Injection]
AppliesTo=.asp,.aspx
DenyDataSection=SQL Injection Strings
ScanUrl=0
ScanAllRaw=0
ScanQueryString=1
ScanHeaders=

[SQL Injection Strings]
--
%3b ; a semicolon
/*
@ ; also catches @@
char ; also catches nchar and varchar
alter
begin
cast
create
cursor
declare
delete
drop
end
exec ; also catches execute
fetch
insert
kill
open
select
sys ; also catches sysobjects and syscolumns
table
update

So this is the first bit. Notice that the only thing we are scanning here is the query string, not the URL or any headers. This gives us a little more leeway with our strings list. But even so, there are a lot of chances for false positives. For example, if I were to have "podcast" in my query string, I would trip the filter because of "cast". So the best thing to do is copy this over and do quick testing to make sure your apps still work. The other thing to do is keep an eye on the log files to see what it is catching.
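To see why "podcast" trips the filter, it helps to think of each entry in the deny data section as a case-insensitive substring test against the query string. The sketch below is a rough approximation in Python, not UrlScan's actual implementation, and the shortened entry list is illustrative:

```python
# Rough approximation of UrlScan-style deny-string matching: each entry
# in the data section is a case-insensitive substring test against the
# query string. NOT the real implementation -- illustration only.
DENY_STRINGS = ["--", "%3b", "/*", "@", "cast", "select", "insert"]

def first_match(query_string):
    qs = query_string.lower()
    for entry in DENY_STRINGS:
        if entry in qs:
            return entry  # the entry that caused the rejection
    return None

print(first_match("show=podcast&id=42"))                  # "cast" -- a false positive
print(first_match("id=1%3BSELECT+name+FROM+sysobjects"))  # "%3b" -- a true positive
print(first_match("page=home"))                           # None -- request allowed
```

This is also why the order of entries only affects which string gets logged, not whether the request is rejected: any single match is enough.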

The obvious gap in the rule above is the fact that the only thing I am checking is the query string. What about the rest of the request? The parts of interest for SQL injection really depend on your web application ... but there are definitely some headers that seem important, like the Cookie header (a popular candidate for script injection as well).

[SQL Injection Headers]
AppliesTo=.asp,.aspx
DenyDataSection=SQL Injection Headers Strings
ScanUrl=0
ScanAllRaw=0
ScanQueryString=0
ScanHeaders=Cookie:

[SQL Injection Headers Strings]
--
@ ; also catches @@
alter
cast
convert
create
declare
delete
drop
exec ; also catches execute
fetch
insert
kill
select

Folks who have been following this will notice that an older version was looking at ScanAllRaw. Even with a trimmed-down list, there were a lot of things breaking, like /* in the Accept-Encoding header and 'cast' in User-Agent strings that contained things like 'broadcast'. So I followed my own advice and reduced the scope a little more.

Another part of the request that folks missed was the request entity, but the explanation for that deviates from this topic sufficiently to warrant its own blog post.

16 Comments

  • Nazim,

    To reduce the false positives I am thinking that we could have more unique terms in the list instead of the more generic terms.

    Let’s take:

    CAST

    We have already identified that this one has potential false positives with podcast, broadcast, etc.

    My logic is that cast is a function, and for the CAST function to be invoked it has to be followed by an open bracket/parenthesis.

    (

    So we could replace the more generic CAST as a restricted word with:

    CAST(

    Also I am thinking that you could emulate the ( with some % encoding - %28 - or maybe insert a blank space - %20 - between the CAST and the open bracket/parenthesis. So I would also include:

    CAST%

    Therefore podcast would pass through but a genuine attack would not.

    I would also look at the rest of the restricted words and see if they have similar behaviour for their associated functions.

    We need to make the terms as unique as possible to reduce the false positives and maintain the security.

    Is my logic flawed here?

    Also I am going to look at any new functions in newer versions of the database software (e.g. SQL Server 2008) that emulate the behaviour of the above commands.

    With the SQL language branching out to be more platform specific, vendors introduce new commands with new versions, so it is possible that other words need to be added to the restricted list.

    With this in mind I might look at creating different rules for different database platforms: one for Oracle 10, SQL 2005, SQL 2008, MySQL 5.x, etc. They all have slightly different functionality and command sets.

    Therefore we include only the valid functions for that database platform, reducing false positives further.

    Although SQL injections are application based and rightly need to be addressed by the developers, attacks can, and will, be more focused on the platforms they are running on. We have seen this in recent times with attacks on asp (implying attacks on IIS servers), and it is well known that there are different attack vectors depending on your SQL version.

    Cheers,

    John

    PS you missed CONVERT from the SQL injection strings :)

  • I agree with Rovastar. Excluding simple terms like "cast" is not realistic. I've analyzed my IIS logs and have come up with these injection sequences.

    %20and%20char(124)%2buser%2bchar(124)
    %20as%20varchar(
    %20varbinary(
    %20cast(is_srvrolemember
    %20table_name%20from%20information_schema.tables
    (select%20top%201%20convert
    select%20*%20from%20sysobjects)
    =cast(0x4400450043004c004100520045
    &cmd=cd%20/tmp;wget%
    %20and%20user%3e0%20and%20
    )%2buser%2bchar(
    %20from%20tbluser
    %20cursor%20for%20select%20
    %20from%20information_schema.columns%20
    %20and%20user%2bchar(
    )%2bdb_name()%2bchar(
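    These sequences are URL-encoded as they appear in the IIS logs; percent-decoding one (e.g. with Python's standard library, shown here only as a convenient decoder) reveals the underlying SQL fragment:

    ```python
    from urllib.parse import unquote

    # Percent-decode a captured injection sequence from the IIS logs to see
    # the SQL fragment the attacker actually sent.
    print(unquote("%20as%20varchar("))                  # " as varchar("
    print(unquote("select%20*%20from%20sysobjects)"))   # "select * from sysobjects)"
    ```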

  • The problem with getting more specific, like 'cast(', is that I can bypass such rules very easily, e.g. 'cast (', 'cast%09(', etc. Remember that the purpose of tools like UrlScan is to get yourself out of trouble quickly. If you are looking for a more powerful way to specify rules, I would advise folks to wait for the URL Rewrite module that will be out soon. You can go wild with your regex patterns then :)
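    The bypass is easy to demonstrate: a plain substring rule cannot express "cast, then optional whitespace, then (", but a regex (the kind of rule URL Rewrite would allow) can. A small Python sketch, purely illustrative:

    ```python
    import re

    # "cast(" as a plain substring misses variants with whitespace between the
    # keyword and the parenthesis (a literal tab here stands in for a decoded
    # %09). A regex with optional whitespace catches them. Illustration only.
    substring_rule = "cast("
    regex_rule = re.compile(r"cast\s*\(", re.IGNORECASE)

    for qs in ["id=1);cast(0x41 as char)",
               "id=1);cast (0x41 as char)",
               "id=1);cast\t(0x41 as char)"]:
        print(substring_rule in qs.lower(), bool(regex_rule.search(qs)))
    # substring rule: True, False, False -- regex rule: True, True, True
    ```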

  • Nazim,

    Thanks for the reply.

    I understand that and I thought my suggestions covered that. Are you saying that using

    CAST(
    CAST%

    (and lets throw in CAST& for good measure)

    instead of CAST will not work?

    I presumed that the CAST%

    would catch

    cast%09(

    Are you saying it would not?

    I see %09 is a tab, I did forget about that, but the same principle applies; IMHO something else could be inserted between the CAST and the (

    How does

    cast (

    work in a url string?

    I presumed you cannot send a space and it would be converted to %20, and then the CAST% would catch it. If not, then can we have reserved words/terms with spaces in them, like 'CAST '?

    Are you saying that URL Rewrite will be available for IIS 6? If not, that will be of little use to 90% of IIS admins out there at this time.

  • Also I want to say that I am more interested in the logic of these specifics at this stage. I would also have to consider how/if other brackets like [ { etc. can be used for the CAST example. I would rather have

    CAST(
    CAST&
    CAST%
    CAST[
    CAST{
    etc, etc
    rather than the generic

    CAST

    Also why do you think that URLScan cannot be used as a more permanent feature rather than a quick fix? I think with the correct rules this can be a very powerful feature, even more so than some of the IIS team seem to suggest.

  • Rovastar,
    I am sorry I misunderstood. CAST( and CAST% would definitely catch the examples I provided. But what about CAST/**/( or other inline comments. The point I am trying to make is that there are a lot of nuances to SQL injection. Using UrlScan to detect advanced scenarios is probably not worth the effort ... even though I have to admit it is a nice little puzzle to attempt solving :)
    UrlScan is nice to block automated attacks like those that are prevailing right now ... but you won't be able to stop a determined hacker with it. If I were in your shoes, I would wait for Url Rewriter and use that to write regex rules ... you can do a lot of cool things with that.
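    To make the inline-comment point concrete, here is a sketch (in Python, purely illustrative; a URL Rewrite rule would use the same regex idea) of a pattern that tolerates whitespace or /*...*/ comments between the keyword and the parenthesis:

    ```python
    import re

    # Even "cast" plus optional whitespace isn't the whole story: SQL Server
    # accepts inline comments between the keyword and the parenthesis, e.g.
    # CAST/**/(...). This pattern allows any mix of whitespace and /*...*/
    # comments in between. Illustration only -- determined attackers will
    # keep finding more variations.
    pattern = re.compile(r"cast(?:\s|/\*.*?\*/)*\(", re.IGNORECASE)

    for qs in ["CAST(0x41 as char)",
               "CAST/**/(0x41 as char)",
               "CAST /*x*/ (0x41 as char)"]:
        print(bool(pattern.search(qs)))  # True for all three
    ```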

  • :) I was aware of the comments one also and I was thinking of blocking /* overall.

    rather than just CAST/

    CA/*blah*/ST( is a pain to protect against at the best of times, so better to ban /* or */, whichever works better. Ideally this scenario is where an AND operator would be nice :)

    URL Rewriter does sound great, but it is for 7 only (there is a lot of 6 and even 5 out there - I expect more 5 than 7 :)) and that is a pain for a lot of people.
    But are we not going to get the same sort of problems, just with URL Rewrite regexes? How is that scenario any different?

    We need the logic to work first, and even if the IIS team don't want to use these suggestions to improve URLScan, hopefully these thought processes can be useful in detecting SQL injection attacks in the URL via URL Rewrite and regexes.

    Like I say, in writing all these comments and forum posts I am mostly brain-dumping info down. I know some of the scenarios have not been thought through enough; that is why I need people like you for the feedback.

  • Rovastar,
    You are absolutely right in the fact that we have to tackle the same problems with the rewriter as well. The difference is that the rewriter tool is more flexible and will allow for richer and more complex rules through regular expressions. UrlScan will only be able to do so much. The 'IIS7 only' point is very valid, but I can now use this as my sales pitch for IIS7 :). I am sorry if I am coming across as negative, I am just trying to keep expectations of UrlScan realistic. I don't want people to be under the impression that they can forget about their apps and other layers of their infrastructure and use UrlScan alone to protect their servers. I do appreciate your brain dumps ... so please keep posting :)

  • Sorry Nazim but I disagree. And I will try and explain why.

    Your argument for URL Rewriter over URLScan is flawed.

    a) I don't want to give the impression that URLScan is a magic bullet. I know it is not. It is down to the devs to fix the code. Parameterized queries are at the moment the best and safest way to protect you from attack; you have to look at every input, etc., for this.

    Analyzing the URL is never going to be the best way of protection, I agree.

    However you are giving the impression that URL Rewriter is better than URLScan. They both use the same principle of scanning a URL. It is not better, and URL Rewriter certainly is not a magic-bullet situation. You seem to give that impression in your posts - I know you understand that, but that is the impression you give.

    b) These techniques are only as good as the rules attached to them.

    A well-defined URLScan will be much better than a poorly defined or even default URL Rewriter. (Defined, for me, in real-world scenarios by what it stops AND the valid requests it doesn't reject.)

    URL Rewriter with exactly the same rules of prohibited words as in your post will, I am betting, be worse compared to URLScan with the more tweaked rules that I am thinking about.

    I hope there will be better regex rules in URL Rewriter. If there are better rules, why have these ideas not been mentioned already for URLScan?

    What specific attack cases do you have in mind that URLScan could not cope with and regex would be able to? There cannot be many.

    c) Regular expressions are too complex for most people to understand let alone modify.

    RegEx is a whole language to understand, whereas URLScan.ini is easy to understand.

    It is easy for someone to change a URLscan.ini to omit a certain word which is valid or to add a new one when a new attack technique surfaces.

    Try that with the average admin and a massive RegEx. It is a completely different kettle of fish.

    I used to know RegEx (it is about 8 years since I last used it).

    I can put a load of comments explaining why I included certain terms in URLScan. How do you propose doing that in RegEx?

    I can look at URLScan rules simply; it could take hours even for an experienced RegEx expert to understand a complex one fully.

    d) Have you considered what you will put in the logs when something fails?

    It is simple to say for URLScan

    "CAST( was detected in the query string", and display the query string.

    What are you going to say for a 1000-character regex failing?

    You must consider this for real world admin use.

    Sorry if I am the one who sounds negative now; I am just looking for potential problems. ATM, even when I do use IIS 7, I will be using URLScan.

    To paraphrase Star Wars: you might say my overconfidence is my weakness, however your faith in your friend URL Rewriter is yours :)

  • I agree w/ Rovastar. Regex is great, but the expressions are a pain unless you keep up on the "language". URLSCAN has worked for me for years, and now w/ v3 it saved my butt w/ this latest round of attacks. I would like to see a few more features in URLSCAN v3, but overall it works great for those of us still maintaining classic ASP from before all this craziness started happening.

  • Simply put, you must know the SQL queries running in your app. For example, I never use any CREATE in my publicly available apps, so I can block 'create' in any SQL query (generally it is used for creating temp tables in a "hand-made" attack). With the Asprox botnet now, you can be sure that varchar(4000) is proof of an attack, and generally there is no similar clause in any asp/aspx SQL app. I never use "" characters in any SQL query in my app, so the current default settings in URLScan 3.0 are useful against many SQL injection attacks, including those writing scripts directly to your database. For example, DECLARE is generally used only in generated SQL scripts, never in an application; perhaps it is more secure to create tables in the database first and later integrate them with the app. You could create an index of the SQL statements used in your internet app, and then you will know which "keywords" you can block. URLScan is a very good tool, and losing traffic is more related to misuse than any other factor. You can also "normalize" your page naming scheme and your query scheme to avoid composite characters or spaces, and then block any white space: you can be sure that many attacks will include a %20, including all hex-encoded queries.

  • Rovastar,
    I get your point, but here is what I am getting at.

    a) UrlRewriter is not a magic bullet either ... however it IS more powerful than UrlScan in its ability to specify complex rules. You can only specify substring matches in UrlScan, but you can do a lot more with UrlRewriter in terms of expressing rules. But I still get your point ... I am coming across as recommending the use of UrlRewriter over UrlScan. This is not my intent ... I believe UrlScan does its job well. There are a few folks who demand a lot of flexibility in their rules ... to them I recommend UrlRewriter. For everyone else, in my opinion, the added overhead of maintaining regexes outweighs any benefit they provide.

    b,c) I can't really argue here :) Regex is complicated and not exactly readable by mere mortals. UrlRewriter module will come with a bunch of UI to make this task easier ... but not foolproof.

    d) You would put the capture group in your log file for regex.

    All I am saying is that you can't have your cake and eat it too. You can't have a lot of flexibility while still maintaining ease of use. If you are willing to sacrifice the latter for the former, go ahead and use UrlRewriter. Otherwise, stick with UrlScan (I know I do :)).

    Hope that clarifies my position.

  • Does UrlScan also filter Request.Form attacks?

  • UrlScan does not look at the request entity and cannot deal with Request.Form. Check http://blogs.iis.net/nazim/archive/2008/06/30/urlscan-v3-0-filtering-based-on-request-entity.aspx

  • hi there,

    Can you please explain why your list is very short now?
    I have a long list now, and I want to cut some entries from it,
    but I don't know what I can cut. Can you help me please? Where can I find answers? :)

  • nustyle -
    This is just a sample list. I can't grow this because the more I do, the more false positives I would be introducing for the various applications out there. I expect admins to use this as a starting point and then add and remove from it as need be. HTH.

Comments have been disabled for this content.