Need a quick Laravel app on the cloud? Deploy your Laravel application on Fly.io; you’ll be up and running in minutes!
The bane of client side pagination is that it retrieves the entire, bulky data set in one go. But do we really need to get the entire data set all at once?
In this article, we’ll apply a combination of data allowance and accumulation strategies to remove this “all data at once” bottleneck and improve client side pagination. We’ll do so quickly and easily, with the help of Livewire!
The Problem
Let’s imagine we have a table called “UserDetails”. It contains user addresses, phone numbers, and life-long mottos. We want to display these records a sensible number of rows at a time, so we show them in a client paginated table, 10 rows per page. To our (anticipated) horror, the number of records grew, and so did the time it took to render the table:
Client Side Pagination has always been associated with retrieving entire data sets in one go.
Once the entire dataset is available in the client, pagination can simply be done on that stored data, removing calls to the server to display the next or previous page.
The client paginated table above was built on the same principle: it waits for the entire UserDetails table to download before it can render anything. So, as the size of its data grew, so did the time it took to retrieve it.
The Solution
How about, instead of getting the entire data set in one go, we get it in parts?
We can apply client side pagination to an initial subset of data stored in the client, provided the subset contains enough data to allow initial pagination—let’s say, three pages’ worth of data (Data Allowance). Then, in the background, we silently call for more data allowance to add on top of this existing subset, allowing us to eventually complete the entire data set we want to paginate (Data Accumulation)!
Set Up
Livewire provides an easy bridge of communication between PHP and JavaScript. It will allow us to easily merge an existing subset of table rows in the client with remaining subsets in the server. So, let’s start by creating a Livewire component for our table: php artisan make:livewire PaginatedTable.
Afterwards, in its component class, declare and initialize a $headers property to list the headers in the table, and a $rowCount property for the number of rows to show per page:
/* app/Http/Livewire/PaginatedTable.php */
class PaginatedTable extends Component
{
    public $headers = [];
    public $rowCount;

    public function mount(){
        // An array of visible fields of the model
        $this->headers = UserDetails::tableHeaders();

        // Number of rows per page
        $this->rowCount = 10;
    }
}
Data Allowance
Next, let’s create a method setNextBatchData() to retrieve our data subset. It will get the necessary number of rows per page via $rowCount, then add an extra pinch of “allowance” by getting two more pages’ worth of data:
In order to share this subset with the client, we dispatch a data-updated browser event through Livewire’s dispatchBrowserEvent. We’ll later set up our view’s JavaScript to listen for this event and update our table’s content with new subsets.
/* app/Http/Livewire/PaginatedTable.php */
public function setNextBatchData(){
    $data = UserDetails::limit( $this->rowCount*3 )->get();
    $this->dispatchBrowserEvent('data-updated', ['data' => $data]);
}
If we show 10 rows per page, setNextBatchData() will retrieve 10*3 = 30 rows instead, giving us the first page plus two additional pages of data to work with.
Of course, we can always adjust this allowance. It can be increased or decreased such that the initial data we send back is enough for multiple pages in the client, but light enough to not cause a bottleneck in page loading.
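To make that arithmetic concrete, here’s a small sketch of how batch size relates to the allowance. Note that batchSize and batchesNeeded are hypothetical helpers used only for illustration; they’re not part of the component:

```javascript
// Hypothetical helpers modeling the "allowance" arithmetic above.
// batchSize: rows fetched per request, given rows per page and pages of allowance.
function batchSize(rowCount, allowancePages) {
  return rowCount * allowancePages;
}

// batchesNeeded: how many background requests it takes to accumulate a full table.
function batchesNeeded(totalRows, rowCount, allowancePages) {
  return Math.ceil(totalRows / batchSize(rowCount, allowancePages));
}

console.log(batchSize(10, 3));         // 30 rows per batch
console.log(batchesNeeded(95, 10, 3)); // 4 batches to load all 95 rows
```

Tuning allowancePages is the trade-off knob: a larger value means fewer round trips but a heavier initial payload.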
Rendering The Table
Next, let’s set up a table in our view. We’ll set up its header row with $headers from our Livewire component, and also include a <nav> section for our Next and Prev buttons:
{{-- resources/views/livewire/paginated-table.blade.php --}}
<table>
  <thead>
    <tr>
      @foreach( $headers as $header )
        <th>{{ $header }}</th>
      @endforeach
    </tr>
  </thead>
  <tbody id="tBody">
  </tbody>
</table>

<nav>
  <button onclick="prevPage()">Prev</button>
  <button onclick="nextPage()">Next</button>
</nav>
Once our HTML’s good to go, we set up JavaScript to render data received from the data-updated event declared above. To start, declare the variables we’ll be using for pagination:
Notice Livewire’s @js directive in the script? We use it to easily convert PHP variables to their respective JavaScript counterparts: $headers becomes mHeaders, and $rowCount becomes rowCount.
<script>
// Data for our table to use
let mData = [];
let mHeaders = @js( $headers );

// Pagination references to display mData items in the table
let page = 1;
let startRow = 0;
let rowCount = @js( $rowCount );
let tBody = document.getElementById("tBody");
Then, we get the initial pages of our table by making a call to the setNextBatchData() method in the server. This method will dispatch a data-updated browser event we can listen to in order to merge new data into the mData array:
Livewire provides the @this directive to easily make a request to methods in a Livewire component. This is only callable once Livewire has loaded.
// Make a request to the setNextBatchData method in the server
document.addEventListener('livewire:load', function () {
@this.setNextBatchData();
});
// Merge incoming subset data into client's array of rows, `mData`
window.addEventListener('data-updated', event => {
mData = [...mData, ...event.detail.data];
renderPage();
});
Every time we get a new subset from the data-updated event, we re-render the table’s visible rows with renderPage(). This will clear the <tbody> tag, then loop through mData to insert rows into the table. It’ll use startRow to get the item from mData for the page’s first row, and rowCount to determine how many more rows to display after it:
function renderPage(){
  // Clear the content of the table
  tBody.innerHTML = '';

  // Add items starting from the `startRow` index
  for( let row = startRow; row < mData.length && row < startRow + rowCount; row++ ){
    // Insert an item into a row
    let item = mData[row];
    let rowTable = tBody.insertRow(-1);

    // Show the item's attributes in the row's cells
    for( let header of mHeaders ){
      let cell = rowTable.insertCell(-1);
      cell.innerHTML = '<div>' + item[header] + '</div>';
    }
  }
}
Client Side Pagination
Above, we initially show only the first page. Let’s now add pagination elements into our view so users can move forward and backward across our table’s pages. Since we rely on startRow to get the nth item from mData to start a page, let’s calculate this first:
/* resources/views/livewire/paginated-table.blade.php */
function getStartRow( page ){
return (page*rowCount)-rowCount;
}
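As a quick sanity check, here’s that same formula evaluated for a few pages, with rowCount fixed at 10:

```javascript
const rowCount = 10;

function getStartRow( page ){
  return (page * rowCount) - rowCount; // equivalent to (page - 1) * rowCount
}

console.log(getStartRow(1)); // 0  -> page 1 starts at mData[0]
console.log(getStartRow(2)); // 10 -> page 2 starts at mData[10]
console.log(getStartRow(5)); // 40
```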
Then add in a nextPage() function. It will first check if the next page’s starting row is within the bounds of the mData array. If so, it will increment page and update startRow. Finally, it will call the renderPage() method to re-render the <tbody>’s content:
function nextPage(){
  let newStartRow = getStartRow( page + 1 );
  if( newStartRow < mData.length ){
    page = page + 1;
    startRow = newStartRow;
    renderPage();
  }
}
Then create another function, prevPage(), to allow our users to move to a previous page. This movement is only possible if the current page is above page 1. If so, page is decremented, a new startRow is set, and the table content is re-rendered:
function prevPage(){
  if( page > 1 ){
    page = page - 1;
    startRow = getStartRow( page );
    renderPage();
  }
}
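Taken together, the next/prev logic forms a small state machine. Here’s a DOM-free sketch of the same logic, assuming 25 rows already accumulated in the client; makePaginator is a hypothetical wrapper used for illustration, not part of the component:

```javascript
// A DOM-free model of the pagination state: `data` stands in for mData.
function makePaginator(data, rowCount) {
  let page = 1;
  const getStartRow = (p) => (p * rowCount) - rowCount;

  return {
    nextPage() {
      const newStartRow = getStartRow(page + 1);
      if (newStartRow < data.length) page += 1; // otherwise stay on the last page
      return page;
    },
    prevPage() {
      if (page > 1) page -= 1;
      return page;
    },
    currentRows() {
      const start = getStartRow(page);
      return data.slice(start, start + rowCount);
    },
  };
}

// With 25 rows and 10 per page, page 3 is the last reachable page:
const pager = makePaginator(Array.from({ length: 25 }, (_, i) => i), 10);
pager.nextPage(); // 2
pager.nextPage(); // 3
console.log(pager.nextPage());           // 3 -> start row 30 is out of bounds
console.log(pager.currentRows().length); // 5 -> page 3 holds rows 20..24
```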
Since our data in the client ends at page 3, nextPage() won’t get past that page. It’s time to add more data allowance into our client with Data Accumulation!
Data Accumulation
To accumulate data, we simply add more data on top of an existing subset. This means we’ll have to merge more data into the mData array in our client. We can get more data through our component’s setNextBatchData(). Let’s revise it to retrieve rows only after the last batch’s last item. To do so, we’ll keep a reference, $lastId, to track the last item from the previous subset:
/* app/Http/Livewire/PaginatedTable.php */
+ public $lastId;

public function mount(){
    $this->headers = UserDetails::tableHeaders();
    $this->rowCount = 10;
+   $this->lastId = 0;
}
We revise setNextBatchData() to get rows after this id. Once the new subset is retrieved, we update $lastId to the new subset’s last item, then dispatch the subset (along with the table’s total row count) to the client:
public function setNextBatchData(){
    // Get rows after the last retrieved id
+   $data = UserDetails::where( 'id', '>', $this->lastId )
+       ->orderBy('id', 'asc')
+       ->limit( $this->rowCount*3 )
+       ->get();

    // Update last id
+   if( $data && $data->last() )
+       $this->lastId = $data->last()->id;

    $this->dispatchBrowserEvent('data-updated', [
        'data' => $data,
+       'totalCount' => UserDetails::count(),
    ]);
}
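The keyset logic above (“give me rows after the last id I saw”) is easy to model outside PHP. This JavaScript sketch simulates the revised setNextBatchData() against a pretend 25-row table; it’s a model of the logic only, not Livewire code:

```javascript
// Simulated UserDetails table with ids 1..25.
const allRows = Array.from({ length: 25 }, (_, i) => ({ id: i + 1 }));

let lastId = 0;
function setNextBatchData(batchSize) {
  // Models "WHERE id > lastId ORDER BY id ASC LIMIT batchSize"
  const data = allRows
    .filter((row) => row.id > lastId)
    .slice(0, batchSize);
  // Update lastId to this batch's last row, if any.
  if (data.length > 0) lastId = data[data.length - 1].id;
  return data;
}

console.log(setNextBatchData(10).length); // 10 -> ids 1..10
console.log(setNextBatchData(10).length); // 10 -> ids 11..20
console.log(setNextBatchData(10).length); // 5  -> ids 21..25
console.log(setNextBatchData(10).length); // 0  -> nothing left
```

Because each batch resumes from $lastId rather than an offset, already-downloaded rows are never re-fetched.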
Now that we have setNextBatchData() set up to get the next subsets for our table, the next question is: when do we actually call it to get more data? In a previous article, we used Livewire’s polling mechanism to periodically get data from the server and add it on top of the data stored in our client. But this time, we’re going to be thrifty with our network calls (and data consumption!), and instead only request additional data whenever the user moves to the next page.
So, we call setNextBatchData() in our client JavaScript’s nextPage() function:
/* resources/views/livewire/paginated-table.blade.php */
function nextPage()
{
  // Next page logic here...

  // Get more next-page allowance
  @this.setNextBatchData();
}
Every time we click on the Next Page button, the table will be re-rendered to display the page requested. Then, at the end, a background call is sent to setNextBatchData()
so that we’ll still have more “data allowance” to keep our client pagination happy.
Merging more data into mData will increase the rows stored in our client until we complete the entire dataset. We can track the total number of rows vs. the rows currently stored in our client, displaying the tally in two elements, curRows and totRows, in our view:
/* resources/views/livewire/paginated-table.blade.php */
window.addEventListener('data-updated', event => {
  mData = [...mData, ...event.detail.data];
  renderPage();

  // Display row tally
+ document.getElementById("curRows").innerHTML = 'Current Rows: ' + mData.length;
+ document.getElementById("totRows").innerHTML = 'Total Rows: ' + event.detail.totalCount;
});
And with that, we have our all new, lightweight, client side paginated table, no longer bogged down by the heaviness of an entire table:
Some Things to Take Notice Of
Now, there are still some elephants in the room that need to be addressed:
Eventually, the entire dataset is going to get downloaded into the client’s browser, and might still be a bit heavy. So, as much as possible, we’d want to “reset” the accumulated data every now and then. Perhaps reset it whenever a non-pagination table interaction occurs, like a search, filter, or even a sort?
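One way such a reset could look on the client side is sketched below. Note that resetData is a hypothetical helper, not from the article’s component, and a real implementation would also need to reset $lastId on the server:

```javascript
// Hypothetical client-side reset for search/filter/sort interactions.
let mData = [1, 2, 3]; // pretend we've accumulated some rows
let page = 3;
let startRow = 20;

function resetData() {
  mData = [];
  page = 1;
  startRow = 0;
  // In the real component we'd also reset $lastId server-side,
  // then call @this.setNextBatchData() to refill the first allowance.
}

resetData();
console.log(mData.length, page, startRow); // 0 1 0
```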
Notice how there are just Prev and Next buttons in our navigation. Can we not have other pages here? Of course we can! We can easily show the page numbers we currently have data for. What about pages not yet in our client? Well, that will take a few more steps and a mix of server side pagination. We’ll need to get the requested page plus allowance from the server, and insert those rows at the proper indices in mData.
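Inserting such an out-of-order batch at its proper indices could be sketched like this. insertRowsAt is a hypothetical helper; gaps in the array simply stay undefined until their pages are fetched:

```javascript
// Hypothetical: place a server-paginated batch at the index where it belongs,
// leaving unfetched ranges as holes (undefined) in the accumulated array.
function insertRowsAt(mData, rows, startIndex) {
  const copy = mData.slice();
  rows.forEach((row, i) => {
    copy[startIndex + i] = row;
  });
  return copy;
}

// Page 1 exists (indices 0..9); the user jumps straight to page 5 (start row 40):
let data = Array.from({ length: 10 }, (_, i) => `row${i}`);
data = insertRowsAt(data, ['row40', 'row41'], 40);
console.log(data[40]); // "row40"
console.log(data[15]); // undefined -> page 2 not fetched yet
```

The renderer would then need to treat undefined entries as "not loaded yet" before displaying a page.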
Lastly, please notice that with this improved client side pagination, we now have lag-free movement between pages, and a much lighter client paginated table.