Four Stars

## How to get first record per key based on max value of a column?

Hi there,

I am struggling with this issue.

Let's say I have this table:

```
Code,Item,Start_Date,End_Date,Item_Price
1,888,2018-01-01,2018-01-03,1
1,888,2018-01-04,2018-01-05,2
1,888,2018-01-07,2018-01-08,3
1,888,2018-01-09,2018-01-10,4
1,888,2018-01-11,null,5
1,999,2018-01-01,2018-01-02,1
1,999,2018-01-03,2018-01-04,2
1,999,2018-01-05,2018-01-08,3
1,999,2018-01-09,2018-01-10,4
1,999,2018-01-11,null,5
```

Key Columns - Code + Item

Let's say, for Current_date = 2018-01-06, I want to retrieve the immediate Previous Record (PR) and the immediate Next Record (NR) by comparing each row's Start_Date to Current_date.

Outputs:

Immediate Previous Record (PR):

```
1,888,2018-01-04,2018-01-05,2
1,999,2018-01-05,2018-01-08,3
```

Immediate Next Record (NR):

```
1,888,2018-01-07,2018-01-08,3
1,999,2018-01-09,2018-01-10,4
```
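For reference, the selection logic above can be sketched in plain Python (a minimal sketch of the intended result, not Talend code; the variable names are my own):

```python
from datetime import date

# (Code, Item, Start_Date, End_Date, Item_Price); None marks an open End_Date
rows = [
    (1, 888, date(2018, 1, 1),  date(2018, 1, 3),  1),
    (1, 888, date(2018, 1, 4),  date(2018, 1, 5),  2),
    (1, 888, date(2018, 1, 7),  date(2018, 1, 8),  3),
    (1, 888, date(2018, 1, 9),  date(2018, 1, 10), 4),
    (1, 888, date(2018, 1, 11), None,              5),
    (1, 999, date(2018, 1, 1),  date(2018, 1, 2),  1),
    (1, 999, date(2018, 1, 3),  date(2018, 1, 4),  2),
    (1, 999, date(2018, 1, 5),  date(2018, 1, 8),  3),
    (1, 999, date(2018, 1, 9),  date(2018, 1, 10), 4),
    (1, 999, date(2018, 1, 11), None,              5),
]

current_date = date(2018, 1, 6)

pr, nr = {}, {}
for row in rows:
    key, start = (row[0], row[1]), row[2]
    # PR: the latest Start_Date strictly before current_date, per key
    if start < current_date and (key not in pr or start > pr[key][2]):
        pr[key] = row
    # NR: the earliest Start_Date strictly after current_date, per key
    if start > current_date and (key not in nr or start < nr[key][2]):
        nr[key] = row
```

Running this reproduces the PR/NR rows listed above (Item_Price 2 and 3 for PR, 3 and 4 for NR).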

I am using two separate flows from a tMap to get the PR and NR records respectively. The PR flow passes only records with Start_Date less than Current_date, and the NR flow passes only records with Start_Date greater than Current_date.

```
       |---> tSortRow (Sort Start_Date Desc) ---> tUniqRow (keys: Code + Item) ---> PR
tMap --|
       |---> tSortRow (Sort Start_Date Asc)  ---> tUniqRow (keys: Code + Item) ---> NR
```

Problem: tUniqRow randomly picks a record and passes it to the output. What I need is the first record per key entering tUniqRow. How can I achieve this? Am I going wrong somewhere?

Help would be appreciated. Thanks.

Employee

## Re: How to get first record per key based on max value of a column?

Hi,

The most reliable method for handling previous and next records is to assign a sequence number to each record as the first step, then store the data in a tHashOutput.

You can then read the tHashOutput records with multiple tHashInput components and join the records in a tMap on the sequence number. To get the previous record's data, left join with main_flow.seq_id = previous_lookup_flow.seq_id + 1. For the next record, use - instead of +.

Once the data is arranged properly in the tMap using this method, you can move on to further activities like sorting and picking unique records.
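The seq_id self-join described above can be sketched in plain Python (a hand-rolled illustration, not Talend code; the field names are hypothetical):

```python
# Each record already carries a sequence number, as the first step suggests.
rows = [
    {"seq_id": 1, "price": 1},
    {"seq_id": 2, "price": 2},
    {"seq_id": 3, "price": 3},
]

# Index the "lookup" copy of the data by seq_id (the tHashInput role).
by_seq = {r["seq_id"]: r for r in rows}

# Left join: main.seq_id = lookup.seq_id + 1 brings in the previous record;
# main.seq_id = lookup.seq_id - 1 brings in the next record.
joined = []
for r in rows:
    prev_row = by_seq.get(r["seq_id"] - 1)
    next_row = by_seq.get(r["seq_id"] + 1)
    joined.append({
        "seq_id": r["seq_id"],
        "price": r["price"],
        "prev_price": prev_row["price"] if prev_row else None,
        "next_price": next_row["price"] if next_row else None,
    })
```

As in a left join, the first record gets no previous row and the last gets no next row (here, None).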

If you are still facing the issue, please share screenshots of the job flow along with component screenshots, and we can take it from there.

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved :-)

Four Stars

## Re: How to get first record per key based on max value of a column?

Hi Nikhil,

What you mentioned up to the tMap is correct. But after it flows into tSortRow, the records are sorted in the order I specify. After that, when they flow into tUniqRow, I simply want the first record per key to come out as the Unique output and the rest to go into the duplicates. But this isn't happening: it randomly picks a record per key instead of the first record per key. Is there any way I can just get the first record per key?

Thanks!

Employee

## Re: How to get first record per key based on max value of a column?

Hi,

Could you please share screenshots of the job flow, the sample data before tSortRow (printed using a tLogRow), the sample data after tSortRow, and a component-details screenshot of tSortRow (including its schema)? You can certainly arrange the data this way and pick the first record, but I would like to see how the arrangement is done with the current data.

If you could also upload the sample data available before the tSortRow component as a CSV file, that would really help my analysis.

Warm Regards,
Nikhil Thampi


Four Stars

## Re: How to get first record per key based on max value of a column?

Hi Nikhil,

Attaching a snapshot of the job. I observed something very strange while developing it.

This is a Big Data Spark job. When I perform the above operations in Local Mode, I get the correct results every time I run the job with the same data. (Please refer to the sample data at the top.)

But when I run the job on the Job Server (Amazon EMR 5.5.0 with YARN Client as the Spark mode), the output of tUniqRow is a random unique record per key, not the top record. (Sometimes I get the correct result, but most of the time I don't.)

When I originally posted the question I was working on the cluster; I then created a separate job in Local Mode and it worked absolutely fine. As soon as I changed the Spark configuration back to run on the cluster, it gave me random outputs again.

I have tested this a number of times, both in Local Mode and on the cluster.

This seems very strange. Am I missing something?
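A plausible explanation (my assumption, not confirmed by Talend documentation): on a cluster, the deduplication step runs after a shuffle that does not preserve the preceding sort order, so "first row per key" is no longer well defined, while in Local Mode everything stays in one partition and the sorted order happens to survive. A shuffle-safe alternative is to reduce each key with an explicit comparison instead of relying on sort + take-first. A plain-Python sketch of that per-key reduce (illustrative names, not a Talend component):

```python
from functools import reduce

# (key, Start_Date, Item_Price): pick the row with the largest Start_Date
# per key in a single pass. The comparison itself decides the winner, so
# the result is the same no matter how the input rows are ordered or
# partitioned (as after a Spark shuffle).
rows = [
    ("1,888", "2018-01-04", 2),
    ("1,888", "2018-01-01", 1),
    ("1,999", "2018-01-05", 3),
    ("1,999", "2018-01-03", 2),
]

def keep_latest(acc, row):
    key = row[0]
    best = acc.get(key)
    # Full-row tiebreak keeps the choice deterministic even on equal dates.
    if best is None or (row[1], row) > (best[1], best):
        acc[key] = row
    return acc

latest = reduce(keep_latest, rows, {})
```

Shuffling `rows` before the reduce yields the same `latest`, which is exactly the property the sort + tUniqRow pipeline loses on the cluster.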
