I'm looking for a Play Framework pagination example. Please help me with a good reference link.
I searched for one but couldn't find any Java code.
I'll provide a Scala solution that you can adapt to your code.
You could create a utility class called Paginator:
package util
case class Paginator[A](items: Seq[A], page: Int, offset: Long, total: Long) {
  private lazy val pageSize: Int = 20 // adjust to your preference

  lazy val prevPage: Option[Int] = Option(page - 1).filter(_ >= 0)
  lazy val nextPage: Option[Int] = Option(page + 1).filter(_ => (offset + items.size) < total)

  lazy val numberOfPages: Long =
    if (total % pageSize > 0) total / pageSize + 1
    else total / pageSize
}
In your controller you will do something like:
import scala.concurrent.Future

def list(currentPage: Int) = Action.async { implicit request =>
  val resultsPerPage: Int = 20 // keep in sync with Paginator.pageSize
  val offset: Int = currentPage * resultsPerPage
  // Put your logic here to get the `listOfItems` you want to display
  // and the number of `totalItems`,
  // then return Ok and call your view:
  Future.successful(Ok(views.html.list(Paginator(listOfItems, currentPage, offset, totalItems))))
}
and finally in your view you will have something like:
@import util.Paginator

@(paginatedList: Paginator[Item])(implicit msg: play.api.i18n.MessagesProvider)

@main(msg.messages("label.list_of_items")) {
  <h1>@msg.messages("label.paginated_list")</h1>

  @* populate your list here *@

  @* and this is your paginator; link(page) stands for your reverse route *@
  <div class="pagination">
    @paginatedList.prevPage.map { page =>
      <a href="@link(page)">@msg.messages("label.previous")</a>
    }
    @for(pageNumber <- 1 to paginatedList.numberOfPages.toInt) {
      @pageNumber
    }
    @paginatedList.nextPage.map { page =>
      <a href="@link(page)">@msg.messages("label.next")</a>
    }
  </div>
}
I got the original idea from this project and I've expanded on it. You can also improve the paginator by using getOrElse to provide alternative or disabled links when you are on the first or last page, or to disable the current page number, etc.
I hope this helps.
I am trying out an API and want to collect multiple questions with an equal distribution across the question categories.
Here is what I have tried so far:
import React from "react";
import axios from "axios";

export default function App() {
  const numberOfQuestions = 15;
  const typesOfQuestions = [{id: 12}, {id: 13}, {id: 14}, {id: 17}];
  const divide = numberOfQuestions / typesOfQuestions.length;
  let remaining = numberOfQuestions % typesOfQuestions.length;
  let newArr = [];
  newArr = typesOfQuestions.map(async category => {
    await axios.get(`https://opentdb.com/api.php?amount=${(divide + remaining)}&type=multiple&category=${category.id}`)
      .then(res => {
        return (newArr.push(res.data.results));
      })
    remaining = 0;
  })
  console.log(newArr);
}
An example of a working API call:
https://opentdb.com/api.php?amount=2&type=multiple&category=12
output:
{"response_code":0,"results":[{"category":"Entertainment: Music","type":"multiple","difficulty":"easy","question":"Who is the frontman of the band 30 Seconds to Mars?","correct_answer":"Jared Leto","incorrect_answers":["Gerard Way","Matthew Bellamy","Mike Shinoda"]},{"category":"Entertainment: Music","type":"multiple","difficulty":"easy","question":"Which 80s band is fronted by singer\/guitarist Robert Smith?","correct_answer":"The Cure","incorrect_answers":["The Smiths","Echo & the Bunnymen","New Order"]}]}
I have taken over a project using gql and Django. It completely follows the structure of the following example: https://stackabuse.com/building-a-graphql-api-with-django/
Now I want to add extra functionality. On top of sending database values to the front end, I would also like to send aggregated values as well, like the number of elements. I've added this to the Node as follows:
class DatabaseDataNode(DjangoObjectType):
    model = DatabaseAnnotationData
    print(DatabaseAnnotationData.objects.filter(database_id__exact=5).count())  # Prints correct value
    total = DatabaseAnnotationData.objects.filter(database_id__exact=5).count()

    class Meta:
        print('Hello world')
        filterset_fields = {
            'databaseid': ['exact'],
            'name': ['exact', 'icontains', 'istartswith'],
            'imagefile': ['exact', 'icontains', 'istartswith'],
            'annotations': ['exact'],
            'created_at': ['exact'],
            'modified_at': ['exact'],
            'slug': ['exact'],
            'total': ['exact']  # <-- Doesn't work
        }
        interfaces = (relay.Node, )
I also add it to the query:
const DATABASE = gql`
  query Database($name: String, $first: Int, $offset: Int) {
    database(databasename: $name) {
      databaseid
      id
      name
      total        # <-- added
      modifiedAt
      images(first: $first, offset: $offset) {
        edges {
          node {
            imageid
            imagefile
            width
            height
            objectdataSet {
              edges {
                node {
                  x1
                  y1
                  width
                  height
                  concepts {
                    edges {
                      node {
                        conceptId
                      }
                    }
                  }
                  objectid
                  imageid {
                    imageid
                  }
                }
              }
            }
          }
        }
      }
    }
  }
`
But when I run it I get an error saying: Cannot query field "total" on type "DatabaseDataNode".
How can I implement this, so that the aggregated value is sent as well?
I am working with Firestore right now and have a little bit of a problem with pagination.
Basically, I have a collection (assume 10 items) where each item has some data and a timestamp.
Now, I am fetching the first 3 items like this:
Firestore.firestore()
.collection("collectionPath")
.order(by: "timestamp", descending: true)
.limit(to: 3)
.addSnapshotListener(snapshotListener())
Inside my snapshot listener, I save the last document from the snapshot, in order to use that as a starting point for my next page.
So, at some time I will request the next page of items like this:
Firestore.firestore()
.collection("collectionPath")
.order(by: "timestamp", descending: true)
.start(afterDocument: lastDocument)
.limit(to: 3)
.addSnapshotListener(snapshotListener2()) // Note that this is a new snapshot listener, I don't know how I could reuse the first one
Now I have the items from index 0 to index 5 (6 in total) in my frontend. Neat!
If the document at index 4 now updates its timestamp to the newest timestamp of the whole collection, things start to go wrong.
Remember that the timestamp determines its position because of the order clause!
What I expected was that after the changes are applied, I would still show 6 items (still ordered by their timestamps).
What actually happened was that after the changes were applied, only 5 items remained, since the item that got pushed out of the first snapshot is not added to the second snapshot automatically.
Am I missing something about Pagination with Firestore?
EDIT: As requested, I'm posting some more code here:
This is my function that returns a snapshot listener. The two methods I use to request the first and second pages are already posted above.
private func snapshotListener() -> FIRQuerySnapshotBlock {
    let index = self.index
    return { querySnapshot, error in
        guard let snap = querySnapshot, error == nil else {
            log.error(error)
            return
        }

        // Save the last doc, so we can later use pagination to retrieve further chats
        if snap.count == self.limit {
            self.lastDoc = snap.documents.last
        } else {
            self.lastDoc = nil
        }

        let offset = index * self.limit
        snap.documentChanges.forEach() { diff in
            switch diff.type {
            case .added:
                log.debug("added chat at index: \(diff.newIndex), offset: \(offset)")
                self.tVHandler.dataManager.insert(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
            case .removed:
                log.debug("deleted chat at index: \(diff.oldIndex), offset: \(offset)")
                self.tVHandler.dataManager.remove(itemAt: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
            case .modified:
                if diff.oldIndex == diff.newIndex {
                    log.debug("updated chat at index: \(diff.oldIndex), offset: \(offset)")
                    self.tVHandler.dataManager.update(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
                } else {
                    log.debug("moved chat at index: \(diff.oldIndex), offset: \(offset) to index: \(diff.newIndex), offset: \(offset)")
                    self.tVHandler.dataManager.move(item: Chat(dictionary: diff.document.data() as NSDictionary), from: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), to: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
                }
            }
        }
        self.tableView?.reloadData()
    }
}
So again, I am asking whether I can have one snapshot listener that listens for changes in more than one page I requested from Firestore.
Well, I contacted the guys over at Firebase Google Group for help, and they were able to tell me that my use case is not yet supported.
Thanks to Kato Richardson for attending to my problem!
For anyone interested in the details, see this thread
I came across the same use case today and I have successfully implemented a working solution in an Objective-C client. Below is the algorithm, in case anyone wants to apply it in their own program, and I would really appreciate it if the google-cloud-firestore team could put my solution on their page.
Use case: A feature to allow paginating a long list of recent chats, with the option to attach real-time listeners so that the chat with the most recent message stays on top.
Solution: This can be made possible by using pagination logic like we do for other long lists and attaching a real-time listener with the limit set to 1:
Step 1: On page load fetch the chats using pagination query as below:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    [self fetchChats];
}

-(void)fetchChats {
    __weak typeof(self) weakSelf = self;
    FIRQuery *paginateChatsQuery = [[[self.db collectionWithPath:MAGConstCollectionNameChats] queryOrderedByField:MAGConstFieldNameTimestamp descending:YES] queryLimitedTo:MAGConstPageLimit];
    if (self.arrChats.count > 0) {
        FIRDocumentSnapshot *lastChatDocument = self.arrChats.lastObject;
        paginateChatsQuery = [paginateChatsQuery queryStartingAfterDocument:lastChatDocument];
    }
    [paginateChatsQuery getDocumentsWithCompletion:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
        if (snapshot == nil) {
            NSLog(@"Error fetching documents: %@", error);
            return;
        }
        ///2. Observe chat updates if not attached
        if (weakSelf.chatObserverState == ChatObserverStateNotAttached) {
            weakSelf.chatObserverState = ChatObserverStateAttaching;
            [weakSelf observeChats];
        }
        if (snapshot.documents.count < MAGConstPageLimit) {
            weakSelf.noMoreData = YES;
        } else {
            weakSelf.noMoreData = NO;
        }
        [weakSelf.arrChats addObjectsFromArray:snapshot.documents];
        [weakSelf.tblVuChatsList reloadData];
    }];
}
Step 2: In the success callback of the "fetchChats" method, attach the observer for real-time updates only once, with the limit set to 1.
-(void)observeChats {
    __weak typeof(self) weakSelf = self;
    self.chatsListener = [[[[self.db collectionWithPath:MAGConstCollectionNameChats] queryOrderedByField:MAGConstFieldNameTimestamp descending:YES] queryLimitedTo:1] addSnapshotListener:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
        if (snapshot == nil) {
            NSLog(@"Error fetching documents: %@", error);
            return;
        }
        if (weakSelf.chatObserverState == ChatObserverStateAttaching) {
            weakSelf.chatObserverState = ChatObserverStateAttached;
        }
        for (FIRDocumentChange *diff in snapshot.documentChanges) {
            if (diff.type == FIRDocumentChangeTypeAdded) {
                ///New chat added
                NSLog(@"Added chat: %@", diff.document.data);
                FIRDocumentSnapshot *chatDoc = diff.document;
                [weakSelf handleChatUpdates:chatDoc];
            }
            else if (diff.type == FIRDocumentChangeTypeModified) {
                NSLog(@"Modified chat: %@", diff.document.data);
                FIRDocumentSnapshot *chatDoc = diff.document;
                [weakSelf handleChatUpdates:chatDoc];
            }
            else if (diff.type == FIRDocumentChangeTypeRemoved) {
                NSLog(@"Removed chat: %@", diff.document.data);
            }
        }
    }];
}
Step 3. In the listener callback, check for document changes and handle only the FIRDocumentChangeTypeAdded and FIRDocumentChangeTypeModified events; ignore the FIRDocumentChangeTypeRemoved event. We do this by calling the "handleChatUpdates" method for both the added and modified events: it first tries to find a matching chat document in the local list, removes it if it exists, and then inserts the document received from the listener callback at the beginning of the list.
-(void)handleChatUpdates:(FIRDocumentSnapshot *)chatDoc {
    NSInteger chatIndex = [self getIndexOfMatchingChatDoc:chatDoc];
    if (chatIndex != NSNotFound) {
        ///Remove this object
        [self.arrChats removeObjectAtIndex:chatIndex];
    }
    ///Insert this chat object at the beginning of the array
    [self.arrChats insertObject:chatDoc atIndex:0];
    ///Refresh the tableview
    [self.tblVuChatsList reloadData];
}

-(NSInteger)getIndexOfMatchingChatDoc:(FIRDocumentSnapshot *)chatDoc {
    NSInteger chatIndex = 0;
    for (FIRDocumentSnapshot *chatDocument in self.arrChats) {
        if ([chatDocument.documentID isEqualToString:chatDoc.documentID]) {
            return chatIndex;
        }
        chatIndex++;
    }
    return NSNotFound;
}
Step 4. Reload the tableview to see the changes.
My solution is to create one extra "maintainer" query listener that observes the items removed from the first query's window, and to update it every time a new message comes in.
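A rough sketch of that idea with the Firestore web (v8) API. The chats collection, the timestamp field, and the listenToLoadedRange helper are placeholder names, not code from the answer above:

// One "maintainer" listener that spans the whole range already loaded,
// so a document pushed out of the first page's window is still observed.
const db = firebase.firestore();

function listenToLoadedRange(oldestLoadedDoc, onChange) {
  return db.collection("chats")
    .orderBy("timestamp", "desc")
    .endAt(oldestLoadedDoc) // bound the listener at the oldest document already loaded
    .onSnapshot(snapshot => {
      snapshot.docChanges().forEach(change => onChange(change)); // added / modified / removed
    });
}

Every time a new page is fetched, the listener can be re-created with the new oldest document so that it keeps covering everything currently displayed.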
To make pagination work with a snapshot listener, we first have to create a reference point document from the collection. After that, we listen to the collection based on that reference point document.
Let's say you have a collection called messages, and each document in that collection has a timestamp called createdAt.
//get messages
async getMessages(){
//first we will fetch the very last/latest document.
//to hold listeners
this.listenerArray = [];
const very_last_document = await this.afs.collection('messages')
.ref
.limit(1)
.orderBy('createdAt','desc')
.get({ source: 'server' });
//if very_last_document.empty is true, it means there are no messages yet,
//so we can run a query without a limit;
//otherwise we have to apply the limit
if (!very_last_document.empty) {
const start = very_last_document.docs[very_last_document.docs.length - 1].data().createdAt;
//listner for new messages
//all new message will be registered on this listener
const listner_1 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.endAt(start) // <-- this will make sure the query fetches up to the 'start' point (including the 'start' point document)
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
},
err => {
//on error
})
//older messages will be registered on this listener
const listner_2 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.limit(20)
.startAfter(start) // <-- this will make sure the query fetches after the 'start' point
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
this.listenerArray.push(listner_1, listner_2);
},
err => {
//on error
})
} else {
//no document found!
//very_last_document.empty = true
const listner_1 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
},
err => {
//on error
})
this.listenerArray.push(listner_1);
}
}
//to load more messages
LoadMoreMessage(){
//Assuming the messages array holds the messages we have fetched
//getting the last element from the array messages.
//that will be the starting point of our next batch
const endAt = this.messages[this.messages.length-1].createdAt
const listner_2 = this.getService
.collection('messages')
.ref
.limit(20)
.orderBy('createdAt', "asc") // <-- should be in 'asc' order
.endBefore(endAt) // <-- getting the 20 documents (the limit we have applied) before the point 'endAt'
.onSnapshot(messages => {
if (messages.empty && this.messages.length)
this.messages[this.messages.length - 1].hasMore = false;
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job
if (message.type === "removed")
//do the job
}
},
err => {
//on error
})
this.listenerArray.push(listner_2)
}
Google's "Report a Bug" or "Feedback Tool" lets you select an area of your browser window to create a screenshot that is submitted with your feedback about a bug.
Screenshot by Jason Small, posted in a duplicate question.
How are they doing this? Google's JavaScript feedback API is loaded from here and their overview of the feedback module will demonstrate the screenshot capability.
JavaScript can read the DOM and render a fairly accurate representation of it using canvas. I have been working on a script which converts HTML into a canvas image. Today I decided to turn it into an implementation for sending feedback like you described.
The script allows you to create feedback forms which include a screenshot, created on the client's browser, along with the form. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation as it does not make an actual screenshot, but builds the screenshot based on the information available on the page.
It does not require any rendering from the server, as the whole image is created on the client's browser. The HTML2Canvas script itself is still in a very experimental state, as it does not parse nearly as much of the CSS3 attributes I would want it to, nor does it have any support to load CORS images even if a proxy was available.
Browser compatibility is still quite limited (not because more browsers couldn't be supported, I just haven't had time to make it more cross-browser compatible).
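To illustrate the underlying idea (a toy sketch only, not html2canvas itself): walk the DOM, read each text node's layout box and computed style, and repaint an approximation of it onto a canvas:

// Toy illustration of the principle behind html2canvas: rebuild an approximate
// picture of the page from DOM layout information instead of a real screenshot.
function paintTextNodesToCanvas(root = document.body) {
  const canvas = document.createElement("canvas");
  canvas.width = root.scrollWidth;
  canvas.height = root.scrollHeight;
  const ctx = canvas.getContext("2d");

  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    const range = document.createRange();
    range.selectNodeContents(node);
    const rect = range.getBoundingClientRect();
    if (!rect.width || !rect.height) continue;

    const style = getComputedStyle(node.parentElement);
    ctx.font = `${style.fontSize} ${style.fontFamily}`;
    ctx.fillStyle = style.color;
    // rect.bottom is only a rough baseline; a real renderer handles wrapping, images, backgrounds, etc.
    ctx.fillText(node.textContent.trim(), rect.left + window.scrollX, rect.bottom + window.scrollY);
  }
  return canvas;
}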
For more information, have a look at the examples here:
http://hertzen.com/experiments/jsfeedback/
edit
The html2canvas script is now available separately here and some examples here.
edit 2
Another confirmation that Google uses a very similar method (in fact, based on the documentation, the only major difference is their async method of traversing/drawing) can be found in this presentation by Elliott Sprehn from the Google Feedback team:
http://www.elliottsprehn.com/preso/fluentconf/
Your web app can now take a 'native' screenshot of the client's entire desktop using getUserMedia():
Have a look at this example:
https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/
The client will have to be using chrome (for now) and will need to enable screen capture support under chrome://flags.
PoC
As Niklas mentioned, you can use the html2canvas library to take a screenshot using JS in the browser. I will extend his answer at this point by providing an example of taking a screenshot using this library ("Proof of Concept"):
function report() {
let region = document.querySelector("body"); // whole screen
html2canvas(region, {
onrendered: function(canvas) {
let pngUrl = canvas.toDataURL(); // png in dataURL format
let img = document.querySelector(".screen");
img.src = pngUrl;
// here you can allow user to set bug-region
// and send it with 'pngUrl' to server
},
});
}
.container {
margin-top: 10px;
border: solid 1px black;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/0.4.1/html2canvas.min.js"></script>
<div>Screenshot tester</div>
<button onclick="report()">Take screenshot</button>
<div class="container">
<img width="75%" class="screen">
</div>
In the report() function, in onrendered, after getting the image as a data URI you can show it to the user and allow them to draw a "bug region" with the mouse, then send the screenshot and the region coordinates to the server.
In this example an async/await version was made, with a makeScreenshot() helper function.
UPDATE
Simple example which allows you to take screenshot, select region, describe bug and send POST request (here jsfiddle) (the main function is report()).
async function report() {
let screenshot = await makeScreenshot(); // png dataUrl
let img = q(".screen");
img.src = screenshot;
let c = q(".bug-container");
c.classList.remove('hide')
let box = await getBox();
c.classList.add('hide');
send(screenshot, box); // send post request with bug image, region and description
alert('To see the POST request with the image go to: chrome console > network tab');
}
// ----- Helper functions
let q = s => document.querySelector(s); // query selector helper
window.report = report; // bind report be visible in fiddle html
async function makeScreenshot(selector="body")
{
return new Promise((resolve, reject) => {
let node = document.querySelector(selector);
html2canvas(node, { onrendered: (canvas) => {
let pngUrl = canvas.toDataURL();
resolve(pngUrl);
}});
});
}
async function getBox(box) {
return new Promise((resolve, reject) => {
let b = q(".bug");
let r = q(".region");
let scr = q(".screen");
let send = q(".send");
let start=0;
let sx,sy,ex,ey=-1;
r.style.width=0;
r.style.height=0;
let drawBox= () => {
r.style.left = (ex > 0 ? sx : sx+ex ) +'px';
r.style.top = (ey > 0 ? sy : sy+ey) +'px';
r.style.width = Math.abs(ex) +'px';
r.style.height = Math.abs(ey) +'px';
}
//console.log({b,r, scr});
b.addEventListener("click", e=>{
if(start==0) {
sx=e.pageX;
sy=e.pageY;
ex=0;
ey=0;
drawBox();
}
start=(start+1)%3;
});
b.addEventListener("mousemove", e=>{
//console.log(e)
if(start==1) {
ex=e.pageX-sx;
ey=e.pageY-sy
drawBox();
}
});
send.addEventListener("click", e=>{
start=0;
let a=100/75 //zoom out img 75%
resolve({
x:Math.floor(((ex > 0 ? sx : sx+ex )-scr.offsetLeft)*a),
y:Math.floor(((ey > 0 ? sy : sy+ey )-b.offsetTop)*a),
width:Math.floor(Math.abs(ex)*a),
height:Math.floor(Math.abs(ey)*a),
desc: q('.bug-desc').value
});
});
});
}
function send(image,box) {
let formData = new FormData();
let req = new XMLHttpRequest();
formData.append("box", JSON.stringify(box));
formData.append("screenshot", image);
req.open("POST", '/upload/screenshot');
req.send(formData);
}
.bug-container { background: rgb(255,0,0,0.1); margin-top:20px; text-align: center; }
.send { border-radius:5px; padding:10px; background: green; cursor: pointer; }
.region { position: absolute; background: rgba(255,0,0,0.4); }
.example { height: 100px; background: yellow; }
.bug { margin-top: 10px; cursor: crosshair; }
.hide { display: none; }
.screen { pointer-events: none }
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/0.4.1/html2canvas.min.js"></script>
<body>
<div>Screenshot tester</div>
<button onclick="report()">Report bug</button>
<div class="example">Lorem ipsum</div>
<div class="bug-container hide">
<div>Select bug region: click once - move mouse - click again</div>
<div class="bug">
<img width="75%" class="screen" >
<div class="region"></div>
</div>
<div>
<textarea class="bug-desc">Describe bug here...</textarea>
</div>
<div class="send">SEND BUG</div>
</div>
</body>
Get screenshot as Canvas or Jpeg Blob / ArrayBuffer using getDisplayMedia API:
FIX 1: Use getUserMedia with chromeMediaSource only for Electron.js
FIX 2: Throw an error instead of returning a null object
FIX 3: Fix the demo to prevent the error: getDisplayMedia must be called from a user gesture handler
// docs: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
// see: https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/#20893521368186473
// see: https://github.com/muaz-khan/WebRTC-Experiment/blob/master/Pluginfree-Screen-Sharing/conference.js
function getDisplayMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getDisplayMedia) {
return navigator.mediaDevices.getDisplayMedia(options)
}
if (navigator.getDisplayMedia) {
return navigator.getDisplayMedia(options)
}
if (navigator.webkitGetDisplayMedia) {
return navigator.webkitGetDisplayMedia(options)
}
if (navigator.mozGetDisplayMedia) {
return navigator.mozGetDisplayMedia(options)
}
throw new Error('getDisplayMedia is not defined')
}
function getUserMedia(options) {
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
return navigator.mediaDevices.getUserMedia(options)
}
if (navigator.getUserMedia) {
return navigator.getUserMedia(options)
}
if (navigator.webkitGetUserMedia) {
return navigator.webkitGetUserMedia(options)
}
if (navigator.mozGetUserMedia) {
return navigator.mozGetUserMedia(options)
}
throw new Error('getUserMedia is not defined')
}
async function takeScreenshotStream() {
// see: https://developer.mozilla.org/en-US/docs/Web/API/Window/screen
const width = screen.width * (window.devicePixelRatio || 1)
const height = screen.height * (window.devicePixelRatio || 1)
const errors = []
let stream
try {
stream = await getDisplayMedia({
audio: false,
// see: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamConstraints/video
video: {
width,
height,
frameRate: 1,
},
})
} catch (ex) {
errors.push(ex)
}
// for electron js
if (navigator.userAgent.indexOf('Electron') >= 0) {
try {
stream = await getUserMedia({
audio: false,
video: {
mandatory: {
chromeMediaSource: 'desktop',
// chromeMediaSourceId: source.id,
minWidth : width,
maxWidth : width,
minHeight : height,
maxHeight : height,
},
},
})
} catch (ex) {
errors.push(ex)
}
}
if (errors.length) {
console.debug(...errors)
if (!stream) {
throw errors[errors.length - 1]
}
}
return stream
}
async function takeScreenshotCanvas() {
const stream = await takeScreenshotStream()
// from: https://stackoverflow.com/a/57665309/5221762
const video = document.createElement('video')
const result = await new Promise((resolve, reject) => {
video.onloadedmetadata = () => {
video.play()
video.pause()
// from: https://github.com/kasprownik/electron-screencapture/blob/master/index.js
const canvas = document.createElement('canvas')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
const context = canvas.getContext('2d')
// see: https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
resolve(canvas)
}
video.srcObject = stream
})
stream.getTracks().forEach(function (track) {
track.stop()
})
if (result == null) {
throw new Error('Cannot take canvas screenshot')
}
return result
}
// from: https://stackoverflow.com/a/46182044/5221762
function getJpegBlob(canvas) {
return new Promise((resolve, reject) => {
// docs: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
canvas.toBlob(blob => resolve(blob), 'image/jpeg', 0.95)
})
}
async function getJpegBytes(canvas) {
const blob = await getJpegBlob(canvas)
return new Promise((resolve, reject) => {
const fileReader = new FileReader()
fileReader.addEventListener('loadend', function () {
if (this.error) {
reject(this.error)
return
}
resolve(this.result)
})
fileReader.readAsArrayBuffer(blob)
})
}
async function takeScreenshotJpegBlob() {
const canvas = await takeScreenshotCanvas()
return getJpegBlob(canvas)
}
async function takeScreenshotJpegBytes() {
const canvas = await takeScreenshotCanvas()
return getJpegBytes(canvas)
}
function blobToCanvas(blob, maxWidth, maxHeight) {
return new Promise((resolve, reject) => {
const img = new Image()
img.onload = function () {
const canvas = document.createElement('canvas')
const scale = Math.min(
1,
maxWidth ? maxWidth / img.width : 1,
maxHeight ? maxHeight / img.height : 1,
)
canvas.width = img.width * scale
canvas.height = img.height * scale
const ctx = canvas.getContext('2d')
ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, canvas.width, canvas.height)
resolve(canvas)
}
img.onerror = () => {
reject(new Error('Error load blob to Image'))
}
img.src = URL.createObjectURL(blob)
})
}
DEMO:
document.body.onclick = async () => {
// take the screenshot
var screenshotJpegBlob = await takeScreenshotJpegBlob()
// show preview with max size 300 x 300 px
var previewCanvas = await blobToCanvas(screenshotJpegBlob, 300, 300)
previewCanvas.style.position = 'fixed'
document.body.appendChild(previewCanvas)
// send it to the server
var formdata = new FormData()
formdata.append("screenshot", screenshotJpegBlob)
await fetch('https://your-web-site.com/', {
method: 'POST',
body: formdata,
// note: don't set Content-Type manually here; the browser adds the multipart boundary for FormData
})
}
// and click on the page
Here is a complete screenshot example that works with Chrome in 2021. The end result is a blob ready to be transmitted. The flow is: request media > grab frame > draw to canvas > transfer to blob. If you want to do it in a more memory-efficient way, explore OffscreenCanvas or possibly ImageBitmapRenderingContext; a sketch of that variant follows after the example.
https://jsfiddle.net/v24hyd3q/1/
// Request media
navigator.mediaDevices.getDisplayMedia().then(stream =>
{
// Grab frame from stream
let track = stream.getVideoTracks()[0];
let capture = new ImageCapture(track);
capture.grabFrame().then(bitmap =>
{
// Stop sharing
track.stop();
// Draw the bitmap to canvas
canvas.width = bitmap.width;
canvas.height = bitmap.height;
canvas.getContext('2d').drawImage(bitmap, 0, 0);
// Grab blob from canvas
canvas.toBlob(blob => {
// Do things with blob here
console.log('output blob:', blob);
});
});
})
.catch(e => console.log(e));
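A sketch of the more memory-friendly variant mentioned above, assuming ImageCapture and OffscreenCanvas support (Chrome); the function name is illustrative:

// Hand the grabbed ImageBitmap straight to an OffscreenCanvas instead of a DOM <canvas>.
async function screenshotBlobOffscreen() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const bitmap = await new ImageCapture(track).grabFrame();
  track.stop(); // stop sharing as soon as the frame is captured

  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  canvas.getContext("bitmaprenderer").transferFromImageBitmap(bitmap); // no extra 2D copy
  return canvas.convertToBlob({ type: "image/png" }); // resolves to a Blob
}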
Here's an example using getDisplayMedia:
document.body.innerHTML = '<video style="width: 100%; height: 100%; border: 1px black solid;"/>';
navigator.mediaDevices.getDisplayMedia()
.then( mediaStream => {
const video = document.querySelector('video');
video.srcObject = mediaStream;
video.onloadedmetadata = e => {
video.play();
video.pause();
};
})
.catch( err => console.log(`${err.name}: ${err.message}`));
Also worth checking out is the Screen Capture API docs.
You can try my new JS library: screenshot.js.
It lets you take a real screenshot.
You load the script:
<script src="https://raw.githubusercontent.com/amiad/screenshot.js/master/screenshot.js"></script>
and take screenshot:
new Screenshot({success: img => {
// callback function
myimage = img;
}});
You can read about more options on the project page.
I want to do some parallel computation and I'm getting a really strange java.lang.NullPointerException when calling ANY function outside the object I have.
Take a look:
case class Return(session: String, job: Int)
case class Ready(n: Int)
case class DoJob(session: String, job: Int)
case class NotReady

object Notifications extends Controller with Secure {

  class AtorMeio extends Actor {
    import scala.collection.mutable.{Map => MMap}

    val job: MMap[(String, Int), Option[Int]] = MMap()

    def act {
      loop {
        react {
          case DoJob(session, jobn) =>
            if (job.get((session, jobn)).isEmpty) {
              jobn match {
                case 1 =>
                  job.put((session, jobn), None)
                  val n = Messaging.oi //Messaging.retrieveNumberOfMessages(new FlagTerm(new Flags(Flags.Flag.SEEN), false))
                  job.put((session, jobn), Some(n))
                case 2 =>
                  // do!
              }
            }
          case Return(session, jobn) =>
            if (job.get((session, jobn)).isDefined && job.get((session, jobn)).get.isDefined) {
              val ret = job.get((session, jobn)).get.get
              job.remove((session, jobn))
              reply(Ready(ret))
            }
            else
              reply(NotReady)
        }
      }
    }
  }

  private var meuator: AtorMeio = null

  lazy val ator = {
    if (Option(meuator).isEmpty) {
      meuator = new AtorMeio
      meuator.start
    }
    meuator
  }

  def pendingNotifications = {
    ator ! DoJob(session.getId, 1)
    ator !? Return(session.getId, 1) match {
      case Ready(ret) =>
        if (ret.toString != Option[String](params.get("current")).getOrElse("-1")) "true" else Suspend("80s")
      case _ =>
    }
  }
}
I'm getting the error when executing Messaging.oi, which is basically an object with:
def oi = 4
Here is the stacktrace:
controllers.Notifications$AtorMeio#1889d53: caught java.lang.NullPointerException
java.lang.NullPointerException
at controllers.Messaging$.oi(Messaging.scala:108)
at controllers.Notifications$AtorMeio$$anonfun$act$1$$anonfun$apply$1.apply(Notifications.scala:38)
at controllers.Notifications$AtorMeio$$anonfun$act$1$$anonfun$apply$1.apply(Notifications.scala:31)
at scala.actors.ReactorTask.run(ReactorTask.scala:34)
at scala.actors.ReactorTask.compute(ReactorTask.scala:66)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:147)
at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325)
Line 108 is exactly this one-liner def. The entry point is def pendingNotifications.
Can anyone help? Thanks a lot!
Have you tried replacing
private var meuator: AtorMeio = null
with an Option, for example:
private var meuator: Option[AtorMeio] = None
Configure your breakpoints view in your debugger to halt/break on NullPointerExceptions ...
And: you did see that you set this to null here:
private var meuator: AtorMeio = null
... right?
OK people, after digging a lot I discovered the problem: somehow, somewhere, if you have the "Controller" class from Play Framework mixed in, it crashes. So I just wrapped this thing in a 'clean' class that does not extend Controller, and it worked.