Do I use Azure Table Storage or SQL Azure for our CQRS read model?

Published 2019-04-21 06:28

Question:

We are about to implement the read portion of our CQRS system in-house, with the goal of vastly improving our read performance. Currently our reads go through a web service that runs a LINQ-to-SQL query against normalized data in a SQL Azure database, with some degree of deserialization along the way.

The simplified structure of our data is:

  • User
  • Conversation (Grouping of Messages to the same recipients)
  • Message
  • Recipients (Set of Users)

I want to move this into a denormalized state, so that when a user requests to see a feed of messages it reads from EITHER:

A denormalized representation held in Azure Table Storage

  • UserID as the PartitionKey
  • ConversationID as the RowKey
  • Any volatile data prone to change stored as entities
  • The messages serialized as JSON in an entity
  • The recipients of said messages serialized as JSON in an entity
  • The main problem with this is the limited size of a row in Table Storage (960 KB)
  • Also any queries on the "volatile data" columns will be slow as they aren't part of the key
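A rough sketch of what one such denormalized entity might look like, in pure Python (names like `build_feed_entity`, `MessagesJson` and the exact limit constants are illustrative assumptions, not part of any SDK; the point is the shape of the data and failing fast on the size limits mentioned above):

```python
import json

MAX_ENTITY_BYTES = 960 * 1024    # approximate row-size limit cited above
MAX_PROPERTY_BYTES = 64 * 1024   # per-property limit for string values

def build_feed_entity(user_id, conversation_id, messages, recipients, subject):
    """Denormalize one conversation into a single table entity."""
    entity = {
        "PartitionKey": user_id,        # all of a user's conversations share a partition
        "RowKey": conversation_id,      # unique within that partition
        "Subject": subject,             # volatile data kept as its own property
        "MessagesJson": json.dumps(messages),
        "RecipientsJson": json.dumps(recipients),
    }
    # Fail fast if the serialized payload would blow the storage limits.
    for key in ("MessagesJson", "RecipientsJson"):
        if len(entity[key].encode("utf-8")) > MAX_PROPERTY_BYTES:
            raise ValueError(f"{key} exceeds the per-property limit")
    total = sum(len(str(v).encode("utf-8")) for v in entity.values())
    if total > MAX_ENTITY_BYTES:
        raise ValueError("entity exceeds the row-size limit")
    return entity

entity = build_feed_entity(
    "user-42", "conv-7",
    messages=[{"id": 1, "body": "hello"}],
    recipients=["user-42", "user-99"],
    subject="Lunch plans",
)
```

A long-running conversation will eventually trip those size checks, which is exactly the weakness listed above.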

A normalized representation held in Azure Table Storage

  • Different table for Conversation details, Messages and Recipients
  • Partition keys for message and recipients stored on the Conversation table.
  • Aside from that, this follows the same structure as above
  • Gets around the maximum row size issue
  • But will the normalized state reduce the performance gains of a denormalized table?
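The cost of the normalized variant is that one feed read becomes three point lookups stitched together client-side. A sketch of that read path, with plain dicts keyed by `(PartitionKey, RowKey)` standing in for the three tables (all table and column names here are illustrative):

```python
# In-memory stand-ins for the three tables, keyed by (PartitionKey, RowKey).
conversations = {("user-42", "conv-7"): {"Subject": "Lunch plans",
                                         "MessagesPartition": "conv-7",
                                         "RecipientsPartition": "conv-7"}}
messages = {("conv-7", "msg-1"): {"Body": "hello"},
            ("conv-7", "msg-2"): {"Body": "see you at noon"}}
recipients = {("conv-7", "user-42"): {}, ("conv-7", "user-99"): {}}

def query_partition(table, partition_key):
    """Simulate a PartitionKey range query against one table."""
    return [row for (pk, _), row in table.items() if pk == partition_key]

def read_feed(user_id):
    """Three queries instead of one: conversations, then messages and recipients."""
    feed = []
    for conv in query_partition(conversations, user_id):
        feed.append({
            "subject": conv["Subject"],
            "messages": query_partition(messages, conv["MessagesPartition"]),
            "recipients": [rk for (pk, rk) in recipients
                           if pk == conv["RecipientsPartition"]],
        })
    return feed

feed = read_feed("user-42")
```

Each `query_partition` call here would be a separate round trip to Table Storage, which is the performance worry raised in the last bullet.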

OR

A denormalized representation held in SQL Azure

  • UserID & ConversationID held as a composite primary key
  • Any volatile data prone to change stored in separate columns
  • The messages serialized as JSON in a column
  • The recipients of said messages serialized as JSON in a column
  • Greatest flexibility for indexing and the structure of the denormalized data
  • Much slower performance than Table Storage queries
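The SQL Azure variant can be sketched with an in-memory SQLite database as a stand-in (the `UserFeed` schema and names are illustrative; SQL Azure would additionally let you index the volatile columns, which is the flexibility point above):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE UserFeed (
        UserID          TEXT NOT NULL,
        ConversationID  TEXT NOT NULL,
        Subject         TEXT,   -- volatile data in its own column
        MessagesJson    TEXT,   -- messages serialized as JSON
        RecipientsJson  TEXT,   -- recipients serialized as JSON
        PRIMARY KEY (UserID, ConversationID)
    )
""")
conn.execute(
    "INSERT INTO UserFeed VALUES (?, ?, ?, ?, ?)",
    ("user-42", "conv-7", "Lunch plans",
     json.dumps([{"id": 1, "body": "hello"}]),
     json.dumps(["user-42", "user-99"])),
)

# The whole feed for a user is a single seek on the composite primary key.
rows = conn.execute(
    "SELECT ConversationID, Subject, MessagesJson, RecipientsJson "
    "FROM UserFeed WHERE UserID = ?", ("user-42",)
).fetchall()
feed = [{"conversation": cid, "subject": s,
         "messages": json.loads(m), "recipients": json.loads(r)}
        for cid, s, m, r in rows]
```

One query, one round trip; whether this is really "much slower" than Table Storage is exactly what the answer below questions.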

What I'm asking is: for anyone with experience implementing a denormalized structure in Table Storage or SQL Azure, which would you choose? Or is there a better approach I've missed?

My gut says that normalized (at least to some extent) data in Table Storage would be the way to go; however, I am worried that conducting three queries to grab all the data for a user will eat into the performance gains.

Answer 1:

Your primary driver for considering Azure Tables is to vastly improve read performance, yet in your scenario you describe SQL Azure as "much slower" in your last point under "A denormalized representation held in SQL Azure". I personally find this very surprising and would ask for a detailed analysis of how that claim was reached. My default position would be that, in most instances, SQL Azure would be much faster.

Here are some reasons for my skepticism of the claim:

  • SQL Azure uses the native/efficient TDS protocol to return data; Azure Tables use JSON format, which is more verbose
  • Joins/filters in SQL Azure will be very fast as long as you are querying on primary keys or indexed columns; Azure Tables have no secondary indexes, and joins must be performed client-side
  • Limitations in the number of records returned by Azure Tables (1,000 records at a time) means you need to implement multiple roundtrips to fetch many records
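That last point means a large result set is consumed as a paged loop driven by continuation tokens. A simulated sketch of the shape of that loop (in the real service the token comes back in response headers; `fetch_page` and the integer token here are stand-ins):

```python
PAGE_SIZE = 1000  # Azure Tables returns at most 1,000 entities per request

# Fake dataset: 2,500 entities in one partition.
DATA = [{"RowKey": f"conv-{i:05d}"} for i in range(2500)]

def fetch_page(continuation=0):
    """Stand-in for one Table Storage request: a page plus the next token."""
    page = DATA[continuation:continuation + PAGE_SIZE]
    nxt = continuation + PAGE_SIZE
    return page, (nxt if nxt < len(DATA) else None)

def fetch_all():
    """Every extra page is an extra round trip to the service."""
    entities, token, round_trips = [], 0, 0
    while token is not None:
        page, token = fetch_page(token)
        entities.extend(page)
        round_trips += 1
    return entities, round_trips

entities, round_trips = fetch_all()
```

Here 2,500 entities cost three round trips; against a remote service each of those adds latency that a single SQL result set would not.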

Although you can fake indexes in Azure Tables by creating additional tables that hold a custom-built index, you own the responsibility of maintaining that index, which will slow your operations and possibly create orphan scenarios if you are not careful.
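The hand-rolled index warned about above looks something like this: a second table keyed by the indexed value, which every write must keep in sync by hand (dicts stand in for the two tables, and all names are illustrative; forgetting or failing the second write is exactly how orphans appear):

```python
# Primary table: feed entities keyed by (UserID, ConversationID).
feed_table = {}
# Hand-built "index" table: subject -> list of primary keys.
subject_index = {}

def upsert_conversation(user_id, conversation_id, subject):
    """Every write now touches two tables, with no transaction across them."""
    key = (user_id, conversation_id)
    old = feed_table.get(key)
    if old is not None:
        # We own index maintenance: remove the stale entry first.
        subject_index[old["Subject"]].remove(key)
    feed_table[key] = {"Subject": subject}
    subject_index.setdefault(subject, []).append(key)
    # A failure between the two writes above would leave the index
    # pointing at entities that no longer match: an orphan.

upsert_conversation("user-42", "conv-7", "Lunch plans")
upsert_conversation("user-42", "conv-7", "Dinner plans")  # subject changed

lookup = subject_index.get("Dinner plans", [])
stale = subject_index.get("Lunch plans", [])
```

A SQL Azure index gives you the same lookup with none of this bookkeeping, maintained transactionally by the engine.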

Last but not least, using Azure Tables usually makes sense when you are trying to reduce your storage costs (it is cheaper than SQL Azure) and when you need more storage than SQL Azure can offer (although you can now use Federations to work around the single-database storage limit). For example, if you need to store 1 billion customer records, Azure Tables may make sense. But choosing Azure Tables for speed alone is rather suspicious in my mind.

If I were in your shoes I would question that claim very hard, and make sure you have expert SQL development skills on staff who can demonstrate that you are hitting performance bottlenecks inherent to SQL Server/SQL Azure before changing your architecture entirely.

In addition, I would define your performance objectives. Are you looking for 100x faster access times? Did you consider caching instead? Are you using indexes properly in your database?
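On the caching suggestion: even a minimal read-through cache can absorb repeated feed reads before any storage change is needed. A sketch (illustrative only; a real system would use something like a distributed cache rather than a process-local dict, and `load_from_db` is a hypothetical loader):

```python
import time

_cache = {}
TTL_SECONDS = 30.0

def cached_feed(user_id, load_from_db):
    """Serve repeated reads from memory; fall back to storage on a miss."""
    entry = _cache.get(user_id)
    now = time.monotonic()
    if entry is not None and now - entry["at"] < TTL_SECONDS:
        return entry["feed"], "hit"
    feed = load_from_db(user_id)
    _cache[user_id] = {"feed": feed, "at": now}
    return feed, "miss"

calls = []
def fake_db(user_id):
    calls.append(user_id)          # count how often storage is actually hit
    return [{"conversation": "conv-7"}]

feed1, status1 = cached_feed("user-42", fake_db)
feed2, status2 = cached_feed("user-42", fake_db)
```

The second read never touches storage at all, which is the kind of gain worth measuring before rewriting the read model.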

My 2 cents... :)