Java Algorithm: Pascal’s Triangle

Pascal’s Triangle

Pascal’s triangle is a problem where you want to print a triangle of a certain height in which each element is the sum of the two elements above it.  The first row is a single 1, the second row is two 1s, and the pattern builds from there with 1s on the ends and every other element being the sum of its two parents.

For Example:

        1
       1 1
      1 2 1
     1 3 3 1
    1 4 6 4 1

Generalized Solution

It’s always good to first try to solve algorithms yourself, without looking at other people’s solutions, so that you truly learn how to work them out in real scenarios.

So, there may be a more efficient solution than this, but here was my approach:

  • Set a list to hold the previous row (initially empty).
  • Loop from 1 to the required depth D, inclusive.
    • Create a new row, then loop once for each item that belongs in that level (level 1 has 1 number, level N has N numbers).
      • If it’s an end number, add a 1 to the new row; otherwise add the sum of its two parents from the previous row.
    • Print the new row.
    • Store the new row as the previous row so it can be used for the next depth level’s parent calculations.

I’m sure you could also compute each row directly with math (binomial coefficients) instead of storing the previous row (see the sketch after the lean version below), but this is pretty elegant and only takes extra space equal to sizeof(int) * row-length, which is really nothing.

Java Solution

package john.humphreys;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class PascalsTriangle {

    public static void main(String[] args) {
        printToDepth(20);
    }

    private static void printToDepth(int d) {

        //Row 1 has value 1, anything less is invalid.
        if (d < 1) return;

        //Keep track of the previous row.
        List<Integer> previousRow = new ArrayList<>();

        //Loop from 1 to target depth inclusively.
        for (int i = 1; i <= d; ++i) {

            //Create a new row to populate with our solution.
            List<Integer> newRow = new ArrayList<>();

            //If this is a row-end (index 0 or the last in the row) add 1, otherwise add the parents' sum.
            for (int ri = 0; ri < i; ++ri) {
                newRow.add(ri == 0 || ri == i - 1 ? 1 : previousRow.get(ri - 1) + previousRow.get(ri));
            }

            //Print out the space-separated row.
            System.out.println(newRow.stream().map(Object::toString).collect(Collectors.joining(" ")));

            //Store this as the previous row.
            previousRow = newRow;
        }
    }
}

If we take out comments, gratuitous spacing, and imports, it’s quite lean:

public class PascalsTriangle {

    public static void main(String[] args) {
        printToDepth(20);
    }

    private static void printToDepth(int d) {
        if (d < 1) return;
        List<Integer> previousRow = new ArrayList<>();

        for (int i = 1; i <= d; ++i) {
            List<Integer> newRow = new ArrayList<>();
            for (int ri = 0; ri < i; ++ri) {
                newRow.add(ri == 0 || ri == i - 1 ? 1 : previousRow.get(ri - 1) + previousRow.get(ri));
            }
            System.out.println(newRow.stream().map(Object::toString).collect(Collectors.joining(" ")));
            previousRow = newRow;
        }
    }
}
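As an aside on the earlier remark about doing this mathematically: each entry of a row is just a binomial coefficient, so any row can be computed on its own without storing the previous one. Here is a minimal sketch of that idea (my own addition, not part of the solution above), written as a method you could drop into the same class; it uses long because the values outgrow int fairly quickly on deeper rows.

private static List<Long> rowFromBinomials(int rowNumber) {
    //Row numbers are 1-based here, so row r corresponds to n = r - 1.
    int n = rowNumber - 1;
    List<Long> row = new ArrayList<>();

    //Walk across the row using C(n, k+1) = C(n, k) * (n - k) / (k + 1), starting from C(n, 0) = 1.
    long value = 1;
    for (int k = 0; k <= n; ++k) {
        row.add(value);
        value = value * (n - k) / (k + 1);
    }
    return row;
}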

SCP/SSH With Different Private Key

If you need to use SSH or SCP with a different private key file, just specify it with -i.  For example, to copy logs from a remote server using a specific private key file and user, do the following:

scp -i C:\Users\[your-user]\.ssh\pk_file [user]@[ip-addr]:/path/logs/* .

The -i flag works regardless of OS, but the example above is run from a Windows machine pulling logs from a Linux server, assuming you store your private keys in your user’s .ssh directory.

Bash – Grep (or Run Another Command) Only on Files Modified in the Last Week or Day

Use Case

I just ran into a simple problem where I had to grep files on a server, but the directory had TONS and TONS of files in it.  I just wanted to target files created within the last week or so.

Working Command

It turns out the find command is very handy for this occasion.  The command below was taken and lightly modified from a Unix Stack Exchange post I found after a fair bit of searching.

find . -mtime -7 -exec grep "my_search_string" {} \;

Basically, it finds everything in “.” (the current directory) that was modified in the last 7 days (as in rolling 24-hour periods, not calendar days), and it executes the grep expression on each matching file.

You can modify the timing however you want with mtime as well as change the target directory or command to execute, and of course you can pipe the output to whatever you want :).

Angular 7 Material Modal

Overview

Getting modals working in Angular + Material took me a lot longer than I expected.  But I must confess that the documentation for them here -> https://material.angular.io/components/dialog/overview was spot on.  You just have to actually read all of it.

I’m going to provide a shorter crash course here showing 100% of what you need code-wise.  I suggest you refer to that main link to understand everything fully though.

Requirements Summary

To get a modal working, assuming you already have Angular + Material working, you need to do the following.  This assumes you are just using the root @NgModule in app.module.ts, but you can use other modules if you like.

  • Import MatDialogModule at the top of your app.module.ts and add it to the imports array in the same file.
  • Import {MatDialog, MatDialogRef, MAT_DIALOG_DATA} and {Inject} in your current page’s .ts file.
  • Create a new HTML file for your modal at the same level as your current page.
  • Add a dialog component into your typescript file.
  • Write code to trigger your dialog to open.
  • Import your dialog component back in your app.module.ts and register it as a declaration *and* as an entry component (you probably have to add the entryComponents array, as it’s not there by default).
    • This is because dialogs are created on the fly and Angular needs extra information to deal with ad-hoc components.

Detailed Code Example

app.module.ts

Again, if you have a multi-@NgModule Angular app, you can still refer to this, but you may put the content in other modules.

//... (normal imports left out for brevity)
import { MatDialogModule} from '@angular/material';
import { DialogOverviewExampleDialog } from "./cs-job-monitor/cs-job-monitor.component"

@NgModule({
  declarations: [
    ...,
    DialogOverviewExampleDialog
  ],
  imports: [
    ...
    MatDialogModule
  ],
  providers: [],
  bootstrap: [AppComponent],
  entryComponents: [
    DialogOverviewExampleDialog
  ],
})
export class AppModule { }

cs-job-monitor.component.ts

This is just one of the pages in my Angular project as generated by the Angular CLI. It just happens to be called cs-job-monitor, but that isn’t important to you.

//... (normal imports left out for brevity)
import { Inject } from '@angular/core';
import {MatDialog, MatDialogRef, MAT_DIALOG_DATA} from '@angular/material';

//Your normal page component.
@Component({
  selector: 'app-cs-job-monitor',
  templateUrl: './cs-job-monitor.component.html',
  styleUrls: ['./cs-job-monitor.component.styl']
})
export class CsJobMonitorComponent {

  constructor(private http: HttpClient, public dialog: MatDialog) {
    //Normal work.
  }

  //In my case, I am opening the modal on the "on select row" event
  //of an angular grid (ag-grid).  But this is not important, just look
  //at how it opens.
  onSelectionChanged(event: Object) {
    const dialogRef = this.dialog.open(DialogOverviewExampleDialog, {
      data: event["api"].getSelectedRows()
    });
  }
}

//Here's your dialog component.  Mine is still named after the example one from
//angular's documentation page (I'll fix that!).  But it works fine.
@Component({
  selector: 'dialog-overview-example-dialog',
  templateUrl: 'dialog-overview-example-dialog.html'
})
export class DialogOverviewExampleDialog {

  constructor(
    public dialogRef: MatDialogRef<DialogOverviewExampleDialog>,
    @Inject(MAT_DIALOG_DATA) public data: any) {}

  onNoClick(): void {
    this.dialogRef.close();
  }
}

dialog-overview-example-dialog.html

Here is the HTML that appears in your dialog when it pops up. For now, I just have it displaying the object you gave it as data, rendered as JSON. In this case, that will be the selected rows from the ag-grid I was using when calling onSelectionChanged(), but I’m not bothering to show that part here.

<pre>
  {{data | json}}
</pre>

Java Regex Capture/Extract Multiple Values

Use Case

When you’re trying to parse complex log lines or extract data from complex strings, regular expression capture groups are about the most useful tool you could possibly ask for.

This example is taken from work where I had to parse and analyze some logs for loading data to a database. A log sample would look like this:

/data/SXF_SX_4906_2019-04-13.01.43.24.143.log:2019-04-13 01:43:28,320 INFO com.x.dc.db.schemagen.batch.listener.JobResultListener [tx.id=IF-TX-ID-a23c195c-673a-47ab-ab0c-7b8591821169] [main] Inside sendEmailNotification method: subject is prod alert:DB copy job STARTED for the dataset:4906

The Code

The relevant part of the code is here:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

private static final String capturePattern =
        "^/.*/SXF_SX_(\\d+)_(\\d{4}-\\d{2}-\\d{2}.\\d{2}.\\d{2}.\\d{2}.\\d{3}).log:(.*) INFO.*" +
        "copy job (.*) for the dataset:.*";

//Leaving out the rest of the class; this is just the regex-parsing portion.
//isValid, fullLogEntry, dataSetId, fileTimestamp, logTimestamp, and status are all
//member variables of the class this constructor belongs to.
public DbLoadLog(String line) {

    isValid = true;

    Pattern r = Pattern.compile(capturePattern);
    Matcher m = r.matcher(line);

    //If you wanted to run over a multi-line-string/file, you could put
    //m.find() in a while loop and keep going; but I'm just analyzing specific lines.
    if (m.find()) {
        fullLogEntry = line;
        dataSetId = Integer.valueOf(m.group(1));
        fileTimestamp = m.group(2);
        logTimestamp = m.group(3);
        status = m.group(4);
    }
    else {
        isValid = false;
    }
}
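To make the group numbering concrete, here is a small usage sketch of my own (not from the original class) that runs the same capturePattern over the sample line from above and shows what each capture group ends up holding:

public static void main(String[] args) {
    String line = "/data/SXF_SX_4906_2019-04-13.01.43.24.143.log:2019-04-13 01:43:28,320 INFO " +
            "com.x.dc.db.schemagen.batch.listener.JobResultListener " +
            "[tx.id=IF-TX-ID-a23c195c-673a-47ab-ab0c-7b8591821169] [main] " +
            "Inside sendEmailNotification method: subject is prod alert:DB copy job STARTED for the dataset:4906";

    Matcher m = Pattern.compile(capturePattern).matcher(line);
    if (m.find()) {
        System.out.println(m.group(1)); //4906                    -> dataSetId
        System.out.println(m.group(2)); //2019-04-13.01.43.24.143 -> fileTimestamp
        System.out.println(m.group(3)); //2019-04-13 01:43:28,320 -> logTimestamp
        System.out.println(m.group(4)); //STARTED                 -> status
    }
}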


Java – Regular/Scheduled Task, One Run at a Time

This will be a very short post, and I’m mostly writing it just so it sticks in my head better.

Common Use Cases

There are many times when you may find that you need to regularly run a task in Java.  Here are a few common examples:

  • You have a cache you need to refresh every X minutes to power a dashboard or something similar.
  • You need to prune old files from a file system once an hour.
  • You need to regularly update stats counters for monitoring.

Coding Options

There are a lot of ways to do this, but the recommended approach is to use a scheduled executor.  Now… this part is easy to remember, but what is sometimes harder to remember is that you have two options when scheduling a task.  I often find myself picking the wrong one as it pops up in auto-complete and I forget there are two options.

  1. Run the task every X seconds/minutes/etc no matter what.
  2. Run the task every X seconds/minutes/etc *after* the previous task completed.

These two things can be very different.  If you have a task that only takes a couple of seconds, it probably doesn’t matter much.  But if you have a task that takes 2 minutes and you’re running it every 1 minute, then with option 1 the executor falls behind and starts each new run as soon as the previous one finishes (a ScheduledExecutorService never runs overlapping copies of the same task, it just queues the late runs), so the task effectively runs back-to-back with no gap; with option 2 you’ll run one copy at a time with a full minute of buffer between runs.

For both options, you can create the scheduled executor service the same way:

ScheduledExecutorService se = Executors.newSingleThreadScheduledExecutor();

But for option #1 (run every interval regardless of previous tasks), you would use this function:

se.scheduleAtFixedRate(this::refreshCache, 10, 120, TimeUnit.SECONDS);

And for option #2 (start counting after previous task completes), you would use this function.

se.scheduleWithFixedDelay(this::refreshCache, 10, 120, TimeUnit.SECONDS);
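To see the difference in action, here is a small, self-contained sketch of my own (the slow Runnable is just a Thread.sleep stand-in for something like a cache refresh). Swap which of the two schedule lines is commented out and compare the printed timestamps:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerDemo {

    public static void main(String[] args) {
        ScheduledExecutorService se = Executors.newSingleThreadScheduledExecutor();

        //A stand-in for a slow task that takes 5 seconds per run.
        Runnable slowTask = () -> {
            System.out.println("start: " + System.currentTimeMillis() / 1000);
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("end:   " + System.currentTimeMillis() / 1000);
        };

        //Option #1: fixed rate - runs are scheduled every 2 seconds no matter how long they take,
        //so with a 5-second task they end up running back-to-back with no idle time.
        //se.scheduleAtFixedRate(slowTask, 0, 2, TimeUnit.SECONDS);

        //Option #2: fixed delay - the 2-second gap is measured from the end of the previous run,
        //so there is always idle time between runs.
        se.scheduleWithFixedDelay(slowTask, 0, 2, TimeUnit.SECONDS);
    }
}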


Does Spring JdbcTemplate Close Connections? … Not Always.

Common Advice – Correct?

Decent developers usually know that they have to use try/catch/finally to ensure they clean up connections, file handles, and any number of other things.  But then, for Java, you hear: “just use JdbcTemplate! It does all this boilerplate for you!”

Uncommon Scenario

Normally when you’re writing an average app, you generally want lots of queries to be able to run in parallel, efficiently, using the same user and password.  In this case, you can easily just use a connection pool and “not worry about it”.  Spring JdbcTemplates will just grab connections from your data source and pool them appropriately based on the data source.  You don’t have to worry about if they are opened, closed, or whatever.

I ran into a scenario today where that was not true though.  I have an app where each user connects to each back-end data source using their own personal account, which is managed by the application itself.  So, each user needs his or her own connection, and pooling would not make much sense unless each user had to do parallel operations (which they don’t).

What Happens to the Connections?

So, here’s the fun part.  I had, for the longest time, assumed that JdbcTemplates would clean up connections in addition to result sets.  In fact, you’ll see this stated online a lot.  But be careful!  This does not appear to be the case, or if it is, it is at least data-source dependent… and that actually makes sense if you think about their purpose.

Here is how I verified this. I created a JdbcTemplate based on a new data source each time (which is needed since the user and password change).

private NamedParameterJdbcTemplate getJdbcTemplate(String email, String password) {
    SimpleDriverDataSource ds = new SimpleDriverDataSource();
    ds.setDriverClass(HiveDriver.class);
    ds.setUrl(url);
    ds.setUsername(email);
    ds.setPassword(password);
    return new NamedParameterJdbcTemplate(ds);
}

Then I used the template for a number of queries in the normal manner (like this):

getJdbcTemplate(email, password)
    .queryForList("describe extended `mytable`.`mytable`",
                  new MapSqlParameterSource());

Then I took a heap dump of the process with this command (run it from the JDK bin folder, which lives under Program Files on Windows or wherever the JDK is installed on Linux, adjusting the paths as needed):

jmap.exe -F -dump:format=b,file=C:\temp\dump.bin your-pid

You can get the PID easily by looking at your running process from JVisualVM (which is also in the bin directory).

Once the dump is complete, load the file into JVisualVM (when browsing for the file, you need to pick the 3rd file-type option to make it show up; I don’t remember its exact pattern).

Finally, go to the Classes tab, go to the very bottom of the screen, and search for the class of interest (in my case, HiveConnection). I can see as many instances as queries I have run, since each query made a new connection from a new data source. They are definitely not being cleaned up.

This surprised me because, even though creating a new template/data source each time is not normal, I expected the connections to be cleaned up when they were garbage collected or as part of normal operations.  After thinking about it more, I realize that operations in my case would not be “normal”, but the lack of cleanup when the connections go out of scope is still definitely a surprise to me.