Wednesday, November 14, 2012

SQL Server: Problems with corrupt files or assemblies

Yesterday I found and fixed a bunch of hard drive issues. One of those issues resulted in my “Microsoft.AnalysisServices” assembly for SQL Server 2012 becoming corrupt. That in turn resulted in VS2010 throwing errors when I tried to use SSDT or do just about anything else. My exact error message was:

Could not load file or assembly 'Microsoft.AnalysisServices, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The module was expected to contain an assembly manifest.

 

There was an MS help link as well, but it basically took me to a page that thanked me for letting them know they needed to work on their documentation. It was a little less than helpful.

 

I tried several things to fix the problem:

  1. Re-apply SP1.
  2. Uninstall just Analysis Services.
  3. Repair SQL 2012.
  4. Uninstall SQL 2012. (This mostly worked, except I still had issues with SSAS.)
  5. Reinstall SQL 2012. (This worked, but I still had issues with Analysis Services.)

All of the above resulted in the same error message at some point and a non-working install of SSAS 2012.

 

I finally came across an article that talked about fixing the Global Assembly Cache. While not everything applied directly, it did get me started.

  1. I tried copying the gacutil files as mentioned in the article. This didn’t work: the program ran, but didn’t do anything. I had to run gacutil.exe from its actual location, found under “c:\Program Files (x86)\Microsoft SDKs\Windows\”. You’ll need to choose the appropriate version for your OS as well as the appropriate x86 or x64 build. In my case (Windows 8), the full path to gacutil.exe was “C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\x64”.
  2. I then located the physical Microsoft.AnalysisServices.dll file under “C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012\x64”.
  3. After that, I was able to run gacutil -if “C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012\x64\Microsoft.AnalysisServices.dll”

After running that command, I uninstalled and re-installed SSAS under SQL 2012 using the standard add/remove features, and everything is now working. I don’t know if anyone else will encounter a similar problem with a bad assembly or assembly manifest, but hopefully this will help someone.

Monday, November 12, 2012

SSDT: Tips, Tricks, and Gotchas

I wanted to add a short post highlighting some things that can trip people up or otherwise cause issues with SSDT.

 

  • When setting trigger order for triggers on a table, you can run into an issue when publishing your database. The project will build successfully, but the publish will throw a "Parameter cannot be null" error for parameter "Key". This is a known bug in the product as of at least SQL 2012 CU2 and has been logged internally at Microsoft, along with a workaround: do not try to set the trigger order in the table definition, but rather put that code in a post-deploy script (see the sketch after this list).
    • This is supposed to be fixed in a future release of SSDT, so your experience may vary.
  • Importing encrypted objects may fail. If you want to store encrypted objects in your project, you'll likely need the original source code in order to create them in the project.
  • Unlike the Schema Compare option in VS 2010, there is no way to set default Schema Compare options, nor is there a way to filter out "skipped" objects in the compare results.
    • I’d love some ideas on how better to handle this. You may be able to save the compare within the project for future re-use, but I had little success with this in VS 2010 projects, so I have been reluctant to try that route again.
  • If your database uses FILESTREAM, you will not be able to debug using the default user instance that SSDT provides. You will need to point your debugging instance to a SQL Server install that supports FILESTREAM. See this forum post for more details.
    • Set this in the Project Properties and point to an actual instance of SQL Server that supports the feature(s) you want to use.
  • SQL Projects will substitute CONVERT for CAST in computed columns at the time of this writing. They also push default constraints with extra parentheses around the default value. If you clean these up from the target server, be aware that on your next Publish action, you could end up with extra work being done to make your target database use those parentheses or the CONVERT function.
    • To work around this, do a SQL Schema Compare after a release to find any areas where the schema on the server differs from that in the project. For instance, you may see DEFAULT (getdate()) on your server, but DEFAULT getdate() in your project. Add the parentheses to your project to avoid unnecessary changes.
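
For the trigger-order issue above, a minimal post-deploy sketch might look like the following (the table and trigger names are hypothetical; adjust to your own objects):

  -- Post-deploy script: set the trigger firing order here instead of in the table definition.
  EXEC sp_settriggerorder
      @triggername = N'dbo.trg_Orders_Audit',
      @order       = N'First',
      @stmttype    = N'UPDATE';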

 

Do you have any tips to share? Add them to the comments below.

Friday, November 9, 2012

SSDT: SQL Project Snapshots

SSDT allows for snapshots to be taken of the project at any point. Just right-click the project name and select the option to "Snapshot Project".
clip_image001
This builds a dacpac file into a folder within the project called "Snapshots" with a default name format of Projectname_yyyymmdd_hh-mi-ss.dacpac.
This file contains a build of the project at the time the snapshot was taken, including all objects and scripts.
 
Uses (by no means a complete list)
  • Save a specific version of your project to use for release
  • Save this version of the project before making changes to the underlying objects
  • Use as a source for schema compare
  • See the differences between snapshots and/or the current project through schema compare
  • Baseline your project
  • Roll back to this state
    • Schema Compare with the snapshot as the source and the Project as the target
    • Import into a new project with this as the source.
    • Import into the current project, but be aware that this could easily produce a lot of duplicate objects




Thursday, November 8, 2012

SSDT: Publishing Your Project

Build the project

In order to successfully publish your project, it must first be able to build successfully.

Start by building your project. Right-click the project and select "Build".

clip_image001

If the build is successful, the project can be published.

 

You may want to create a folder within your project to store saved Publish Profiles. These can be used later to easily publish the project to your servers.

clip_image002

 

Creating Publish Profiles

Right-click the project and select Publish. This will bring up the Publish Database dialog.

  • Choosing to publish or opening a saved publish profile will initiate a build of the project.

clip_image003

Choose your target database, set your advanced options (similar to the Schema Compare options), and use the "Save Profile As" option to save the profile to a location within your project. Selecting the "Add profile to project" option will create the publish profile in the root of the project. You may wish to either move the file to the folder storing all of your publish profiles or, if you saved it without adding it to the project, show all files in the project so you can include it.

clip_image004

 

Some options you may want to consider:

  • "Always re-create database" - this will re-create the database. Any data in the database will be lost.
  • "Block incremental deployment if data loss might occur" - If there are any changes that could result in the publish action failing because of data loss, this option will stop the script from running.
  • "DROP objects in target but not in project" - This will remove anything in the database that doesn't exist in the project. Useful if you want consistency, but you may want to ensure this isn't checked if there could be objects in the database that were created, but didn't make it to the project.

Under the "Advanced Deployment Options"

  • Allow Incompatible Platform - Useful if you may publish to a different version of SQL Server than the one specified in the project
  • Include transactional scripts - Will run the entire update operation as a transaction. If any one part fails, the transaction will roll back. If you have cross-database dependencies, selecting this option could result in no changes being published if you're publishing to a new server. For a new publication, you may want to de-select this option to ensure a successful deploy of what can be published.
  • Script state checks - This option will ensure that the publish action will only work on the specified server and database.
  • Verify deployment - Checks the database and project before publishing to try to ensure there are no changes that will cause problems with the publication such as missing data for a foreign key.

 

Using Publish Profiles

Once you've set up your publish profiles, you can easily use these to push changes to that server and database without needing to specify additional parameters. The easiest way to use them is to double-click the Publish Profile within the project and choose to either "Generate Script" or "Publish".

Generate Script will generate a script that you can use to update the target at a later time (run it in SQLCMD mode).

Publish will immediately attempt to push the changes to the target.

You can also use these at a later point to push changes through the SQLPackage.exe command line.

 

SQLPackage

To publish your package from the command line, use something like the following:

  sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac /pr:.\Publish\Local.publish.xml

The above will:

  • Use the "Publish" Action
  • Use the Source File named Adventureworks2008.dacpac, built in the sql\Local folder
  • Use the publish profile named "Local.publish.xml" (defined to push to the local SQL Server)

You may want to add SQLPackage.exe to your path. By default it is installed in:

C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin

You can override quite a few of the default settings through various command line arguments. This includes source, target, and variables. You can find a full list of the command line arguments at the SQL Package reference online.

 

Jenkins Automation for CI

We use Jenkins at my current workplace and set up a Jenkins job to do the following (with thanks to Matthew Sneeden for the assistance):

  • Get the latest from our mainline repository
  • Build each SQLProj file.
    • Building the SLN file would also result in attempting to publish the database, so we build each project file individually.
    • msbuild .\Adventureworks.sqlproj /t:build /p:Configuration="Local"
      • This assumes that msbuild.exe is in your path.
    • The Configuration setting mostly controls the location of the generated dacpac file.
  • Run SQLPackage with a specified Publish Profile for the appropriate environment, using the newly built dacpac as the source.

We are currently investigating how we can use Snapshot files to better control releases to our UAT and Production environments. This series will be updated when that information is available.

Wednesday, November 7, 2012

SSDT: Errors and Warnings

SSDT includes an Errors and Warnings window that is well worth your attention. Ideally, your project should have no errors or warnings.

clip_image001

 

However, sometimes coding errors slip into your project, or you get warnings that an object can't be found because it exists in another database. Sometimes a warning might appear because an object is missing completely, in this project or another one. In these cases, it's well worth checking this window to find out where you may have issues.

Warnings are not necessarily a problem. SSDT will bring possible issues to your attention, but warnings will not stop a project from building and publishing unless you have set the option to treat warnings as errors or unless there really is an underlying problem that causes an issue during the publish phase.

For example, if I modify the Person.Address table to add a new column in the code, but forget to add a comma, I'll get an error something like this.

clip_image002

If you double-click on the line, the editor should open the appropriate file and take you pretty close to your problematic line. Correct the problem, save the file, and move on to the next error.
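
As a rough illustration, a trimmed-down sketch of that missing-comma mistake might look like this (the new column name is hypothetical; only the missing comma matters):

  CREATE TABLE [Person].[Address] (
      [AddressID]    INT           IDENTITY (1, 1) NOT NULL,
      [AddressLine1] NVARCHAR (60) NOT NULL         -- missing comma here...
      [PostalRegion] NVARCHAR (20) NULL             -- ...produces a syntax error on this line
  );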

 

Some common warnings/errors

  • 4151 - Unresolved database reference. This is often caused when an object in one database references an object in another database. It can often be resolved by adding a database reference to the project.
  • 71562 - Unresolved database reference warning.
  • 71502 - Another unresolved database reference warning.
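
As an example of what triggers these (the object names here are hypothetical), a procedure that uses a three-part name to reach another database will produce an unresolved reference warning until a database reference is added:

  -- Cross-database reference; this warns until the project has a database reference for OtherDb.
  CREATE PROCEDURE [dbo].[usp_GetExternalData]
  AS
  BEGIN
      SELECT SomeColumn
      FROM [OtherDb].[dbo].[SomeTable];
  END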

How to globally suppress certain warnings

  • Right click the root of the project and select properties
  • Click on the "Build" tab
  • Enter the numeric portion of the codes, separated by commas (for example, 71562,71502). Remove the "SQL" prefix and any leading zeroes when entering the codes.

clip_image003

Tuesday, November 6, 2012

SSDT: Updating a Project Manually

Sometimes it's necessary to modify the project manually. The change might require specific tweaking to include just a couple of new lines in a stored procedure or function. It might be just adding a column to a table. Maybe you know exactly what change needs to be made and would rather just edit it manually instead of comparing or importing. Whatever the reason, updating your project manually can be done without too much trouble.
If you know the name and location of the file you want to edit, just go straight to it and right-click it.
clip_image001
You have two ways to edit the file. Using "View Code" will bring up a T-SQL script; edit it as you would any other T-SQL script and save. Remember that SSDT scripts create all base objects using CREATE scripts.
If you choose to View Designer, you'll see a new screen combining a design view with a T-SQL editor.
clip_image002
 
Here you can choose to edit within either window. The code will be kept in sync across the panels. You can right-click any of the Keys, Indexes, Triggers, Constraints in the upper window and choose to add a new one. You'll get a shell of the script to create a new object tied to this table. Modify its code to match what you want to do and save the file.
  • This is different behavior from the older Database Projects, which created separate files for each object by default. In SSDT, a table and its related objects are kept together in one script unless you upgraded from a DB Project.
  • The only place SSDT supports "GO" to break up batches is within these create scripts (see the sketch below). You cannot use GO in a pre- or post-deploy script.
  • If you highlight a column in the table, you can give it a description in the Properties window. This is an easy way to propagate object/column descriptions into your SQL Server.
  • You can select the table's properties in the Properties window dropdown to edit its properties/description.
clip_image003
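
A minimal sketch of such a table file (hypothetical names), with the table and a designer-added index kept as separate batches in the same script:

  CREATE TABLE [dbo].[Widget]
  (
      [WidgetID]   INT           NOT NULL,
      [WidgetName] NVARCHAR (50) NOT NULL,
      CONSTRAINT [PK_Widget] PRIMARY KEY CLUSTERED ([WidgetID])
  );
  GO
  -- An index added through the designer becomes another batch in the same file.
  CREATE NONCLUSTERED INDEX [IX_Widget_WidgetName]
      ON [dbo].[Widget] ([WidgetName]);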
 
 
SQL Server Object Explorer
If you hit the F5 key inside a project to debug it, SSDT will build a local database, often hosted in an instance called (localdb)\Databasename. This will run the database project in a debug session. If you then open the SQL Server Object Explorer view, you can edit the project and underlying scripts.
This works in most instances where you’re using basic SQL Server functionality. If you’re using more advanced features such as FILESTREAM, you’ll want to change the location of this debug instance. You can change this in the project properties.
clip_image004

Double-clicking the HumanResources.EmployeePayHistory table above brings up the editor for the underlying table in the project.
clip_image005













Monday, November 5, 2012

SSDT: Updating a Project by Importing Scripts

Sometimes your developers will work on new SQL Objects and give you scripts to alter or create objects. SQL Projects support importing those scripts into your project. Start by choosing the Import Script option.
clip_image001
Find your script or scripts.
clip_image002
Select your options for import:
clip_image003
 
If you see the following text in your log, be sure to check the ScriptsIgnoredOnImport.sql file it mentions to see if something was missed on import. You'll need to make these changes manually, if applicable. In a lot of cases, the statements that aren't understood tend to be "GO" statements.
“In the script that you provided to the import operation, one or more statements were not fully understood. These statements were moved to the ScriptsIgnoredOnImport.sql file. Review the file contents for additional information.”
I've also found that "ALTER TABLE" statements are not well understood by SSDT within imported scripts. If you get several scripts that include these statements, you can either compare the physical database to the project or update the project manually (see the sketch below). (This has been acknowledged by Microsoft as "working as designed," even if we might wish that the import could actually change the script for the object instead.)
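
For instance, a developer might hand you a change script like the following (the names are hypothetical); rather than the import folding it into the table definition, you would add the column to the table's CREATE script in the project yourself:

  -- Developer's change script; the import may set this aside in ScriptsIgnoredOnImport.sql
  -- rather than updating the table definition.
  ALTER TABLE [dbo].[Customer]
      ADD [MiddleName] NVARCHAR (50) NULL;

  -- In the project, you instead edit the table's CREATE script, e.g.:
  -- CREATE TABLE [dbo].[Customer] ( ..., [MiddleName] NVARCHAR (50) NULL, ... );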




Friday, November 2, 2012

SSDT: Updating by Using Schema Compare

Sometimes changes are made to a shared database that need to be brought into your SQL Project and there are no saved change scripts for those changes. Other times you may just want to see what will change when you publish a project. SSDT has included a Schema Compare option for SQL Projects, dacpac files, and databases.
If you are using VS2010, there are two relevant menus: SQL and Data.
clip_image001
  • The "Data" menu is used for the older DB Projects within VS2010. There's a useful data compare option in there, but the schema compare will not work for SQL 2012 or SQL Projects.
  • The "SQL" menu contains the Schema Compare item with a sub-item to do a new Schema Comparison
clip_image002
To update your project from a shared database, start a "New Schema Comparison". You'll see a screen something like this:
clip_image003
Setting your source/target is pretty straightforward and each will produce a screen something like the following:
clip_image004
You can choose to compare against an open project, a live database, or a Data-tier Application File (dacpac). In our case, we are going to select a live database as the source and our project as the target.
Once selected, you may want to change the options to exclude certain object types or ignore certain settings such as file placement. Unlike the older compare in VS2010, these options cannot currently be saved as defaults. When you've set the options to your liking, click the "Compare" button.
clip_image005
If there are any differences, you'll see a list of them here, showing where the source and target differ. This can be really helpful for deciding whether or not to include a given change in the update.
To exclude a change, clear the checkbox next to it in the upper section of the window. A green + indicates that this object will be created. A red - indicates that the object will be dropped. The blue pencil indicates that the object will be modified. At this time there is no way to hide any unchecked items that show in the compare.
Once you're satisfied with your selection, click the "Update" button to push those changes into your project.








Thursday, November 1, 2012

SSDT: Pre and Post Deploy Scripts

At some point, you will likely need to release changes that contain more than just schema changes. You may need to update data in a table, remove data that could cause a problem, or insert some data to support a code release. To support these changes, SSDT provides support for Pre-Deploy and Post-Deploy scripts.
These scripts are not created by default in a new SSDT project. To organize your scripts, you may want to create a folder structure similar to the following.
clip_image001
If you right-click the Post or Pre Deploy folders, you can choose to add a Script item.
clip_image002
You will then get a choice of which type of script to include. Choose Pre or Post deployment, as appropriate, and name the file accordingly.
You can only have one Pre-Deployment script and one Post-Deployment script per project as part of the build! Any additional scripts must be set to “Not in build” or you will likely get errors.
clip_image003
If you look at the properties of these scripts, you will see that the Build Action is set to PreDeploy or PostDeploy. Opening the Post-Deploy script will show this default text.
/*
Post-Deployment Script Template
--------------------------------------------------------------------------------------
 This file contains SQL statements that will be appended to the build script.
 Use SQLCMD syntax to include a file in the post-deployment script.
 Example:      :r .\myfile.sql
 Use SQLCMD syntax to reference a variable in the post-deployment script.
 Example:      :setvar TableName MyTable
               SELECT * FROM [$(TableName)]
--------------------------------------------------------------------------------------
*/


 
Using Pre and Post Deploy Scripts
These files are interpreted as SQLCMD scripts by SSDT. You can use most valid SQLCMD syntax within the script file to include other scripts and set variables. At the time of this writing, you cannot use the :ON ERROR IGNORE command within a script. This will hopefully be addressed in a future release of SSDT.
Pre-Deploy scripts will always run before the schema changes are run. If you alter the database in a way that changes the schema, you may encounter errors with the release. E.g., if you make schema changes to the Person.Person table, but drop that table in a Pre-Deploy script, your publish action will likely fail.
Likewise, Post-Deploy scripts will always run after the schema changes have been applied. This makes them a great place to insert new data, make small adjustments to the system, or perhaps implement custom security and permissions. The caveat to using post-deploy scripts to make changes is that the changes need to be repeatable. If you write an insert statement for a new lookup value, that same insert will run on the next publish unless you check for the existing value first (see the sketch below).
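
A minimal sketch of a repeatable insert, assuming a hypothetical lookup table, might look like this:

  -- Post-deploy: only add the lookup row if it doesn't already exist,
  -- so the script can safely run on every publish.
  IF NOT EXISTS (SELECT 1 FROM [dbo].[OrderStatus] WHERE [StatusCode] = 'SHIP')
  BEGIN
      INSERT INTO [dbo].[OrderStatus] ([StatusCode], [StatusName])
      VALUES ('SHIP', 'Shipped');
  END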
To create a new script to run in pre or post deploy:

  • Right-click the appropriate folder (Pre-Deploy, Post-Deploy, other)
  • Choose to add a "script" and select the "Script (Not in build)" option
  • Give it a name, preferably one without spaces, as that makes it easier to reference from the pre- or post-deploy script.
  • Add your code changes. You may want to include some descriptive comments, and a PRINT statement can be helpful if you want to see progress when running the deploy script manually.

    • Make sure that you can re-run this script without causing errors! Check for data that may already exist, use outer joins, wrap your code in Try/Catch - whatever you need to do to make sure that you can have this run again if necessary.
  • After saving the script, edit your pre- or post-deploy script and add a new line in a non-commented area to run a script called "MyScript", something like:

      :r .\MyScript.sql

    • This assumes that your script is in the same folder as your pre or post-deploy scripts. Adjust the relative path as needed for the script you created.

  • Save your deploy script.

The next time you publish your project, it will pick up this script and include it in your change set. If you choose to create a script on publish, you can see the full text of your scripts included in the pre or post-deploy section of your change script.
 
Cleanup of Pre and Post Deploy Scripts
There are several options for cleanup, but one of the best suggestions I've seen has been to generate project snapshots and then remove the script references from your pre/post deploy scripts and the script files themselves from the project. They will still be saved in the snapshot, but will not be in your project anymore.  You may be able to manage this well through your Version Control System, but snapshots do have some advantages.
Pros:

  • Your project will remain somewhat clean.
  • Script files will be saved with their appropriate snapshot, ideally tied to a particular release.
  • Less concern about whether a script is replayed, because it has been removed.
  • Good for scripts that have a larger impact and should only be run once.

Cons:

  • It's a manual process and requires some attention to detail.
  • You need to look in the snapshots to see what scripts were used.
  • It may require that you have a more formalized release process.
  • You may need to publish several snapshots to bring the database up to a current version.