While going through some of the tutorials for HDInsight I decided to test adding my own data set to Azure and running some Hive queries against it. I found two ways to do this.
The first way was using Azure Storage Explorer. This has a nice UI that lets you click and upload your files. The only challenge I found with it was placing the file in a specific folder/directory. Enter Azure PowerShell.
|$destContext = New-AzureStorageContext|
Here it will prompt for the StorageAccountName and the StorageAccountKey. You can find those in Azure by going to “Storage,” selecting the storage account associated with your HDInsight environment, and then clicking “Manage Access Keys” at the bottom of the page.
A new window will pop up with the information:
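If you would rather not copy the key out of the portal, the classic Azure PowerShell module can retrieve it for you. A minimal sketch, assuming you have already authenticated with Add-AzureAccount and that “mystorageaccount” stands in for your actual storage account name:

|# Pull the primary key directly and build the context in one go
$accountName = "mystorageaccount"
$accountKey  = (Get-AzureStorageKey -StorageAccountName $accountName).Primary
$destContext = New-AzureStorageContext -StorageAccountName $accountName -StorageAccountKey $accountKey|

This avoids the interactive prompts entirely, which is handy if you want to script the whole upload.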
Once you enter that information, create a new variable with the location of the file you’re trying to upload:
|$fileName = "C:\Users\Victor\Desktop\sampleFile.txt"|
Next, run the command to upload the file to your specified location:
|Set-AzureStorageBlobContent -File $fileName -Container <containerName> -Blob <blobName> -Context $destContext|
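With the placeholders filled in, the call looks something like this (“mycontainer” is a stand-in for your own container name):

|# Upload sampleFile.txt into the Samples folder of the container
Set-AzureStorageBlobContent -File $fileName `
    -Container "mycontainer" `
    -Blob "Samples/sampleFile.txt" `
    -Context $destContext|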
The containerName can be found by accessing your storage and clicking on Containers at the top:
The BlobName is the name and location of the file you are uploading. Blob storage has no real folders; the “path” is just part of the blob name, with “/” as the separator. So if you want the file to be uploaded to the “Samples” folder, your BlobName value would be “Samples/sampleFile.txt”. After executing the command you should get a confirmation that the file was uploaded, and you can access it through the File Browser in your Query Console.
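You can also verify the upload from PowerShell by listing the blobs in the container. A quick sketch, again using “mycontainer” as a placeholder:

|# List blobs under the Samples "folder" to confirm the upload
Get-AzureStorageBlob -Container "mycontainer" -Context $destContext |
    Where-Object { $_.Name -like "Samples/*" } |
    Select-Object Name, Length, LastModified|

If the upload succeeded, your file should appear in the output with its size and last-modified timestamp.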