Google Drive API v3 + Sheets + Shared Drives in Java

There are plenty of examples of how to use the Google Drive API online. A ton of them are for old versions though, and most only cover basic cases (nothing on restricted sharing options and the like). Also, virtually none show you how to do things with shared drives.

I had to do all of this recently, so I hope this helps someone else avoid the pain I went through =). The only thing this assumes is that you have a valid credentials file generated from the developer console.

Defining Scopes

These scopes should all be enabled for your credentials on the consent screen in the developer console, and they also need to be listed in your code:

private static final List<String> SCOPES;

static {
    SCOPES = new ArrayList<>();
    SCOPES.add(SheetsScopes.DRIVE);
    SCOPES.add(SheetsScopes.DRIVE_FILE);
    SCOPES.add(SheetsScopes.SPREADSHEETS);
}

Get Credentials

private HttpRequestInitializer getCredentials(NetHttpTransport httpTransport) {
    GoogleCredential credential = null;
    try {
        credential = GoogleCredential.fromStream(new FileInputStream(credentialsFilePath), httpTransport, JSON_FACTORY)
                .createScoped(SCOPES)
                .createDelegated(svcAccount);
    } catch (IOException e) {
        logger.error("ERROR Occurred while Authorization using the credentials provided...!!!");
    }
    return setHttpTimeout(credential);
}
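
The setHttpTimeout helper used above isn’t shown here; a minimal sketch of what it could look like (the three-minute timeouts are just an assumption) is:

private HttpRequestInitializer setHttpTimeout(final HttpRequestInitializer requestInitializer) {
    // Wrap the credential so every request it initializes gets longer timeouts.
    return httpRequest -> {
        requestInitializer.initialize(httpRequest);
        httpRequest.setConnectTimeout(3 * 60000); // 3 minutes to connect (assumed value).
        httpRequest.setReadTimeout(3 * 60000);    // 3 minutes to read (assumed value).
    };
}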

Get Sheet and Drive Services

private Sheets getSheetService(String applicationName, NetHttpTransport httpTransport) throws FileNotFoundException {
    return new Sheets.Builder(
            httpTransport,
            JSON_FACTORY,
            getCredentials(httpTransport)
    ).setApplicationName(applicationName).build();
}

private Drive getDriveService(String applicationName, NetHttpTransport httpTransport) throws FileNotFoundException {
    return new Drive.Builder(
            httpTransport,
            JSON_FACTORY,
            getCredentials(httpTransport)
    ).setApplicationName(applicationName).build();
}
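
To wire these up, you need a transport and a JSON factory. Something along these lines works (the application name is just a placeholder):

// JSON_FACTORY referenced above is typically the Jackson factory.
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();

// Build the shared transport once and reuse it for both services
// (newTrustedTransport throws GeneralSecurityException / IOException).
NetHttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
Sheets sheetService = getSheetService("my-application", httpTransport);
Drive driveService = getDriveService("my-application", httpTransport);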

Create a Spreadsheet and Control Permissions

You can create a sheet easily with the sheet service. But if you want to put your sheet in a specific parent folder and change its permissions/sharing settings, then you need to create it with the drive service instead, setting the spreadsheet MIME type.

You can find your folder ID by navigating to the folder in Google Drive and copying the ID from the URL. Since we set “supports all drives”, we can create this file in a folder on our shared drive. Without this setting, requests against shared drives fail with an authorization-style error.

private File createSpreadSheet(Drive driveService, String sheetTitle, String userFolderId) {
    try {
        File fileSpec = new File();
        fileSpec.setName(sheetTitle);
        fileSpec.setParents(Collections.singletonList(userFolderId));
        fileSpec.setMimeType("application/vnd.google-apps.spreadsheet");

        File sheetFile = driveService.files()
                .create(fileSpec)
                .setSupportsAllDrives(true) // Shared drives don't work without this parameter.
                .execute();

        // Restrict sharing/copying. Use a fresh File for the update so we only send
        // the writable fields, and remember to actually execute the request.
        File restrictions = new File();
        restrictions.setViewersCanCopyContent(false);
        restrictions.setCopyRequiresWriterPermission(true);
        restrictions.setWritersCanShare(false);
        driveService.files()
                .update(sheetFile.getId(), restrictions)
                .setSupportsAllDrives(true)
                .execute();

        return sheetFile;
    } catch (IOException e) {
        throw new RuntimeException("Error occurred while creating the sheet.", e);
    }
}
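
The drive service can also grant access to specific users. This isn’t in my original code, but a rough sketch looks like the following (the email address and role are placeholders):

// Hypothetical example: give one user write access to the newly created sheet.
Permission permission = new Permission()
        .setType("user")
        .setRole("writer")
        .setEmailAddress("some.user@example.com"); // placeholder address

driveService.permissions()
        .create(sheetFile.getId(), permission)
        .setSupportsAllDrives(true)        // again, required for shared drives
        .setSendNotificationEmail(false)
        .execute();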

Write Data to a Spreadsheet

private void writeToSpreadSheet(Sheets service, String spreadSheetId, String json) {
    final String range = "Sheet1";
    ValueRange body = new ValueRange()
            .setValues(getJsonData(json));
    UpdateValuesResponse response;
    try {
        response = service
                .spreadsheets()
                .values()
                .update(spreadSheetId, range, body)
                .setValueInputOption(VALUE_INPUT_OPTION)
                .execute();
    } catch (IOException e) {
        throw new RuntimeException("ERROR Occurred while insert / updating the values in Google Spread Sheet : " + spreadSheetId + "\n" + e);
    }
    logger.info(response.getUpdatedCells() + " cells updated.");
}
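
VALUE_INPUT_OPTION and getJsonData aren’t shown above. VALUE_INPUT_OPTION is just the Sheets value input mode (“RAW” or “USER_ENTERED”), and getJsonData converts the incoming JSON into the List<List<Object>> the API expects. A minimal sketch, assuming the JSON is already a two-dimensional array and Jackson is on the classpath:

private static final String VALUE_INPUT_OPTION = "USER_ENTERED"; // or "RAW"

// Hypothetical helper: parse a JSON 2-D array (e.g. [["a","b"],["c","d"]]) into sheet rows.
private List<List<Object>> getJsonData(String json) {
    try {
        return new ObjectMapper().readValue(json, new TypeReference<List<List<Object>>>() {});
    } catch (IOException e) {
        throw new RuntimeException("Could not parse the JSON payload into sheet rows.", e);
    }
}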

Find a Folder in Another Folder

private String getFolderIdIfExists(Drive driveService, String folderName) throws IOException {

    FileList folders = driveService.files().list()
            .setSupportsAllDrives(true)
            .setIncludeItemsFromAllDrives(true)
            .setQ(String.format("'%s' in parents and mimeType = 'application/vnd.google-apps.folder' and name = '%s'",
                    mainFolderId, folderName))
            .execute();

    return folders.getFiles().size() == 1 ? folders.getFiles().get(0).getId() : null;
}

Create a Folder In a Specific Folder

private String createUserFolderAndGetId(Drive driveService, String folderName) throws IOException {

    File fileSpec = new File();
    fileSpec.setName(folderName);
    fileSpec.setParents(Collections.singletonList(mainFolderId));
    fileSpec.setMimeType("application/vnd.google-apps.folder");

    File targetFolder = driveService.files()
            .create(fileSpec)
            .setSupportsAllDrives(true) //Share drives don't work without this parameter.
            .execute();

    return targetFolder.getId();
}
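
Putting it all together, the flow is roughly: look up the user’s folder, create it if it doesn’t exist, create the sheet inside it, and write the data. A sketch using the helpers above (the folder name, sheet title, and JSON payload are placeholders; exception handling omitted):

String userFolderId = getFolderIdIfExists(driveService, "some.user");
if (userFolderId == null) {
    userFolderId = createUserFolderAndGetId(driveService, "some.user");
}

File sheetFile = createSpreadSheet(driveService, "my-report", userFolderId);
writeToSpreadSheet(sheetService, sheetFile.getId(), reportJson); // reportJson: placeholder JSON payload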

Helm 3 / GitLab Uninstall If Exists

Helm 3 does not seem to have a good way to “uninstall if exists” unfortunately. So, we had to find a way around that to make sure we could wipe out a previous deployment reliably, in CI/CD (in cases where we had to change a deployment version, which is rare).

As we use GitLab, we found this trick in the docs:

If any of the script commands return an exit code different from zero, the job will fail and further commands won’t be executed. This behavior can be avoided by storing the exit code in a variable:

job:
  script:
    - false || exit_code=$?
    - if [ $exit_code -ne 0 ]; then echo "Previous command failed"; fi;

Using this, you can do:

helm uninstall -n your-namespace some-deployment-0-0-5 || exit_code=$?

And, while you’ll still see a message that the release wasn’t found on any run after it’s already gone, the pipeline will continue on fine.

IntelliJ Maven Not Resolving Dependencies / Not Applying Excludes

I recently had a ton of issues with dependencies in IntelliJ with Maven, on multiple consecutive occasions. This is pretty odd as I’ve used IntelliJ and Maven for probably around 10 years (I even have the top YouTube videos on that combination!).

I’m on IntelliJ 2020.1 currently, and I found a few things through painful trial and error here. I hope they help you.

  1. Apparently at some point they removed the “Automatically Import” option that used to pop up when you created/imported a Maven project. This used to make things resolve automatically as you changed your POM. Now you need to make sure you build to pull in dependencies, and for good measure I also click the “re-import” button on the Maven tool window (the little icon in the top row).
  2. There is now an “offline” mode. So, you may keep failing and failing to install because you can’t resolve dependencies on, say, Maven Central. This is super confusing as you may actually be online, looking at Maven Central, and seeing the dependencies there. If this happens, check/disable offline mode!
  3. You may add excludes to a complex dependency (like the Hive metastore in my case) to remove transitive dependencies that break your app/framework (like Spring Boot). You should make sure to change the POM, clean, and then re-import the Maven project again to ensure they’re really gone. I kept seeing them in the external dependencies list in the object browser, and my builds kept failing, until I did that last re-import step.
  4. If all else fails: clean, invalidate-and-restart (in the File menu), install, and re-import. That seems to be a good catch-all for when you’re completely lost.

This all seems pretty crazy to me, but I’ve gone through it a few times now and it seems right. I hope it helps you save some of the time I wasted!

Presto Resource Groups Practical Notes

I recently had to start using resource groups in Presto. I’ll expand this over time with example configurations and such, but for now, I’m just taking some notes on things that are not necessarily obvious.

Concurrency Limit vs Connection Pool Size

Being a Java guy, I always visualize any database work as if it’s being done from a connection pool. Without any resource groups, I was able to run hundreds of parallel queries against Presto, which requires hundreds of connections in a Java connection pool.

When we added resource groups with concurrency limits, I was curious – if I have a connection pool of 100 and launch 100 queries in Java, and I have a hard concurrency limit for that user/group of 25, what happens?

Presto will let you launch the 100 parallel connections/queries from Java, and it will queue 75 of those queries/connections, assuming your queue size in the resource group is at least 75. If your queue size were 50, though, you would have 25 running queries, 50 queued queries, and 25 queries failing with a note about resources being exceeded on the cluster, like this:

Caused by: java.sql.SQLException: Query failed (#20200704_001046_01778_pw9xr): Too many queued queries for “global.users.john.humphreys”
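
For reference, the kind of test this describes is easy to reproduce from Java. A rough sketch (the JDBC URL, user, and table name are placeholders, and the Presto JDBC driver is assumed to be on the classpath):

// Fire 100 queries at Presto in parallel. With a hard concurrency limit of 25 and a
// queue size of 50, roughly 25 of them fail with the "Too many queued queries" error.
private void launchParallelQueries() throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100; i++) {
        pool.submit(() -> {
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:presto://presto.example.com:8080/hive/default", "some.user", null);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_large_table")) {
                while (rs.next()) { /* drain the result */ }
            } catch (SQLException e) {
                System.err.println(e.getMessage()); // queued/rejected queries surface here
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
}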

CPU Limits – Practical Effects

You can put soft and hard limits on CPU. They are a little hard to calculate though; you have to think in terms of total cores in the cluster and the period over which the limits are checked. E.g. if your period is 30 minutes, you have 10 worker servers, and you have 32 cores per server, then there are 30 * 10 * 32 = 9,600 core-minutes of CPU time available on your cluster in that period. So, you can assign a user/group, say, 3,200 core-minutes to give them 1/3 of the cluster time.

This will *not* prevent them from using 100% of the cluster CPU for an hour, though. If they start 25 parallel queries (keeping our 25 limit from earlier), and all queries run for more than an hour and use all CPUs, Presto does *not* have logic advanced enough to restrict or penalize those queries while they are still running.

New queries after that will be severely penalized though. E.g. I tested huge queries with a 5-minute period, giving a user 10% of the cluster via CPU limits. As the queries used the whole cluster for much more than 5 minutes, that user was not allowed to run queries for over an hour! So, a user can be penalized for many times the original period.

This last part made it hard for me to use the limits. It would mean one harsh query by a production user could cause them to basically not be able to run their app for many hours.

Also, since CPU limits are a hard number rather than percentage-based (like memory), it means that if you auto-scale your Presto cluster (custom, Starburst, or EMR), users cannot take advantage of the extra capacity while still being prevented from using the whole cluster.

All in all, I found the CPU limits not amazingly useful as a whole, but they may be useful for keeping ad-hoc users from using much of the cluster. E.g. allow applications to do what they need to, but stop random users from doing damage with concrete limits.

Also note – you can specify CPU limits and the period in whatever units you like. So, if it helps you, use hours to keep the numbers smaller – but don’t forget to factor node count and core count into the actual limits. The period itself is not related to the number of nodes or cores, so don’t confuse that part.

Sub Groups

Sub-groups appear in most examples, and I would point out that they are probably the most powerful feature to make use of. They let you, say, group all ad-hoc users together and say that all ad-hoc users combined can’t use more than 40% of the cluster memory, but any one ad-hoc user can use up to 20%. That way you can protect the overall cluster while still ensuring at least two users can use their maximum memory amount in parallel (very useful).

Sub-groups can be dynamically named based on the user, and you can have multiple. E.g. we put all our application users in one group and sub-group, and our ad-hoc users in another one with far fewer resources. App users start with “app.”, so this is really easy to pull off with the pattern support.

Presto / Hive: Find Parquet Files Touched/Referenced by a Query/Predicate

We had a use case where we needed to find out which Parquet files were touched by a query/predicate. This was so that we could rewrite certain files in a special way to remove specific records. In this case, Presto was not mastering the data itself.

We found this awesome post -> https://stackoverflow.com/a/44011639/857994 on Stack Overflow which shows this pseudo-column:

select "$path" from table
This correctly shows you the parquet file a row came from, which is awesome!  I also found this  MR  which shows work has been merged to add $file_size  and $file_modified_time properties which is even cooler.
So, newer versions of presto-sql have even more power here.