SQL Server – No Arrays? No Problem!

Just a couple of weeks ago I was pointing out the lack of a LIMIT keyword in SQL Server 2005. Last week, I was reminded of the lack of arrays. I have a hard time believing that a platform/language/framework/anything aimed at developers in this day and age doesn’t support arrays, but we’ll assume the SQL Server team had more important things to do the past 6 years.

You’ll run into the array wall frequently while doing SQL Server programming, but to me, the most obvious example is dealing with a CheckBoxList. The CheckBoxList is a perfect example because it typically holds a varying number of values (the user could pick any number of them). Take, for example, a blogging system that supports “tags”. An author could pick 0 or more tags and then hit the “Post” button.

Private Sub Post_Click(o As Object, e As EventArgs)
  Dim post As New Post()
  post.Subject = Subject.Text
  post.Body = Body.Text
  For Each item As ListItem In Tags.Items
    If item.Selected Then
      post.Tags.Add(New Tag(item.Text, item.Value))
    End If
  Next
  'do something nice here...
End Sub

You’ll eventually end up in a method that looks like:

Friend Sub SavePost(post As Post)
  Using connection As New SqlConnection(GET_FROM_CONFIG)
    Using command As New SqlCommand()
      'now what?
    End Using
  End Using
End Sub

It’d be sweet if we could pass in an array of TagIds, like:

command.Parameters.Add("@TagIds", SqlDbType.Int).Value = post.TagIds

But we can’t.  One solution might be to write some dynamic SQL…but that’s no fun to maintain. Another is to hit the database multiple times:

For Each tag As Tag In post.Tags
  SaveTag(post.Id, tag.Id)
Next

This isn’t a bad idea, and can even be done in a single connection.Open(). I prefer a third solution though. In SQL Server 2000, I would pass in a comma-delimited string and convert it into a table, something like:
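For completeness, here’s roughly what the single-open-connection variant could look like. This is only a sketch: SaveTag here is a hypothetical helper that runs one INSERT for a post/tag pair against the open connection it’s handed.

```vb
' Sketch only: SaveTag is a hypothetical helper that issues a single
' INSERT for one post/tag pair on the supplied open connection.
Friend Sub SaveTags(post As Post)
  Using connection As New SqlConnection(GET_FROM_CONFIG)
    connection.Open()
    For Each tag As Tag In post.Tags
      SaveTag(connection, post.Id, tag.Id)
    Next
  End Using
End Sub
```

One connection, N round trips — fine for a handful of tags, less so for bulk inserts.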

  @Tags VARCHAR(2000)

   DECLARE @tagIds TABLE (value INT)
   INSERT @tagIds SELECT IntValue FROM dbo.CsvToInt(@Tags)
   DECLARE @postId INT
   -- ...insert the post and capture its new id into @postId...
   -- (PostTags here is a hypothetical post/tag join table)
   INSERT INTO PostTags (PostId, TagId)
      SELECT @postId, [value]
         FROM @tagIds


Here’s what my CsvToInt function might look like:
CREATE FUNCTION dbo.CsvToInt
   (@Array VARCHAR(2000))
RETURNS @IntTable TABLE (IntValue INT)
AS
BEGIN
  IF @Array <> '' BEGIN
    DECLARE @separator CHAR(1)
    SET @separator = ','

    DECLARE @separator_position INT
    DECLARE @array_value VARCHAR(2000)

    SET @Array = @Array + @separator
    WHILE PATINDEX('%' + @separator + '%', @Array) <> 0 BEGIN
      SELECT @separator_position = PATINDEX('%' + @separator + '%', @Array)
      SELECT @array_value = LEFT(@Array, @separator_position - 1)
      INSERT @IntTable VALUES (CAST(@array_value AS INT))
      SELECT @Array = STUFF(@Array, 1, @separator_position, '')
    END
  END
  RETURN
END
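With the function in place, a quick sanity check shows each comma-delimited value coming back as its own row:

```sql
-- One row per CSV entry: 3, 7 and 42
SELECT IntValue FROM dbo.CsvToInt('3,7,42')
```

From there the result joins like any other table.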

This solution has a number of limitations. The first is the length limit, which might prove problematic in some situations. Worse, though, is that it only really works for comma-delimited integers. What happens if you want to pass in a more complex structure, like a name=>value collection?

Well, with SQL Server 2005, things have gotten a lot better. Instead of passing in a CSV, we can leverage SQL Server’s XML capabilities. Changing the parameters of what we are trying to do (from a simple CSV to a key=>value bulk insert), we first need to convert our collection into some XML:

Friend Shared Function NameValueToXml(data As NameValueCollection) As String
  Dim sb As New StringBuilder("<data>")
  For Each key As String In data
    sb.AppendFormat("<meta><key>{0}</key><value>{1}</value></meta>", key, data(key))
  Next
  Return sb.Append("</data>").ToString()
End Function

Which means we’ll end up with a string that looks like:

    <data>
      <meta><key>Some Key</key><value>Some Value</value></meta>
      <meta><key>Some Key 1</key><value>Some Value 1</value></meta>
      <meta><key>Some Key 2</key><value>Some Value 2</value></meta>
    </data>

Next, we pass the value into the stored procedure:

command.Parameters.Add("@Tags", SqlDbType.Xml).Value = NameValueToXml(post.Tags)

And finally, we store the values into our table:

  @Tags XML

      SELECT ItemData.row.value('key[1]', 'varchar(200)'),
             ItemData.row.value('value[1]', 'varchar(200)')
         FROM @Tags.nodes('/data/meta') ItemData(row)


The syntax isn’t the most obvious. The way I think of it, nodes() produces a row at ItemData.row for each /data/meta element, and the value() method then pulls child values out of each row. Note that you can also access attributes by using the @name syntax.
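For instance, if the tags were carried as attributes rather than child elements (a variation on the XML above, not what NameValueToXml generates), the @name syntax would look like this:

```sql
DECLARE @Tags XML
SET @Tags = '<data><meta key="Some Key" value="Some Value" /></data>'
SELECT ItemData.row.value('@key', 'varchar(200)'),
       ItemData.row.value('@value', 'varchar(200)')
   FROM @Tags.nodes('/data/meta') ItemData(row)
-- returns one row: Some Key, Some Value
```

The attribute form is a bit terser on the wire, which matters if you’re shuttling a lot of pairs across.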


11 Responses to SQL Server – No Arrays? No Problem!

  1. Sushant says:

    I think we can implement better code… the one I’ve seen at http://www.sommarskog.se/arrays-in-sql.html#iter-list-of-integers
    works well even for characters.

  2. Haacked says:

    I agree with piers in that parsing a comma-separated string is not well suited to a set-based language like SQL.

    But wouldn’t being able to pass in a set be ideally suited to a set-based language? Having to pass in XML seems like overkill just to pass in a set of integers to a set based language.

    Unfortunately, there’s not much else you can do other than call the same sproc over and over.

  3. karl says:

    I don’t think this logic belongs in the middle tier. It’s very specific to a particular vendor – SQL Server. Perhaps this is classical impedance, but it seems to me that it isn’t up to the business layer to worry about the workings of the database layer – why should it care what SQL Server does and doesn’t support (let alone try to solve the problem)? We might just have two different points of view on this, and I would quite welcome any enlightenment you can provide. To me, this is the prime responsibility of the data access and data layers.

  4. piers7 says:

    Parsing a comma-separated string in SQL is the canonical example of what *not* to do at the database layer, because it’s the canonical example of what *not* to do in a set-based language like SQL.

    Now true you can get round this using the XML syntax, and/or using SQL-CLR, but you’ve really got to ask yourself should this logic not sit in the middle tier?

    “…assume the SQL Server team had more important things to do the past 6 years”

    Yeah, core database functionality, rather than replicating what’s best done elsewhere.

  5. If you’re building an XML string from user entered input, you probably do not want to be using StringBuilder and String.Format to pull it together.

  6. Jon Galloway says:

    I’d love to see a SQLCLR JSON parser. JSON’s arrays are much more lightweight than XML, which makes a difference when pushing thousands of rows.

    Oren (Ayende) has an interesting solution – use SqlBulkCopy to push an entire table over, then join onto it:

  7. johnpapa says:

    Yeah, you could use the SQL CLR, but it’s somewhat overkill just to pass an array, IMO. I like the XML fragment approach using the new XQuery extensions in SQL 2005 (as you showed in your post, Karl). I think the XML approach is the cleanest of the dirty shirts.

  8. karl says:

    Yes, I thought about the SQLCLR also. I’m not there at all (ie, 0 experience with it). If anyone wanted to provide an example for completeness it’d be welcome.

  9. sahilmalik says:

    Well you could do this using OPENXML in SQL2k also. So nothing new there, but OPENXML had a few downsides.

    Another cool thing you could do in SQL2k5 is use SQLCLR to send in a serialized object as a byte array – which is then deserialized inside SQL Server. In most situations the extra memory overhead won’t balance out the performance gain you get, but it could be a good solution where you are not fully sure, when the code is written, of the parameters being passed in.

    There is IMO, still no good solution for passing in arrays. In Oracle you can get around this problem but SQL Server still needs these workarounds.

  10. jfranco says:

    Couldn’t this also be accomplished in SQL 2000? I’ve been doing this for years using the OPENXML function.