1👍
To do what you want, we can take advantage of the WebRTC features and APIs that modern browsers support. They allow developers to get user media (webcam) without Flash, which is no longer supported, and to share the user's screen. It's the latter that we want to use.
Requirements
The given solution will work on browsers that support WebRTC (recent Chrome, Firefox, Edge and Opera will work for sure).
As for any WebRTC feature, you MUST work on an HTTPS page, even for local development. Without HTTPS, the feature is disabled and the browser logs an error in the console. For security's sake, using it inside iframes can be tricky; for instance, a code snippet on Stack Overflow will not work because of the same-origin policy.
Obviously, the user must explicitly give their permission and will have to choose what they want to share: a single window, a browser tab (inner document only) or the full screen. You can't overcome that decision; it belongs to the user alone.
For more info, see the Screen Capture API documentation on MDN.
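If you want to guard against unsupported browsers or insecure contexts before calling the API, a minimal feature-detection check (a sketch, not part of the demo below) could look like this:
if (!navigator.mediaDevices || !navigator.mediaDevices.getDisplayMedia) {
  // Screen capture is unavailable: old browser, or a non-HTTPS context
  console.warn("getDisplayMedia is not supported in this context");
}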
A working demo
So, since we can get the user's screen in real time as a stream, the idea is to attach this stream to a video element and then capture a video frame into a canvas. From a canvas, you can get all the data you need: send it to a server, create a Blob and let your user download it, or whatever else you want.
<html lang="en">
<body>
<video id="video" style="display:none;" autoplay></video>
<canvas id="screenshot" width="600px" height="400px"></canvas>
<button id="capture">Take screenshot</button>
<script>
const videoElem = document.getElementById("video");
const canvas = document.getElementById("screenshot");
const context = canvas.getContext('2d');
//We don't want the audio, and we want to see the user's cursor
const displayMediaOptions = {
video: {
cursor: "always"
},
audio: false
};
async function takeScreenshot() {
try {
//There, we get a full screen capture in real time using the WebRTC API.
//And we set the video source with the stream in srcObject.
videoElem.srcObject = await navigator.mediaDevices.getDisplayMedia(
displayMediaOptions
);
} catch(err) {
console.error("Error: " + err, err);
}
}
//We want to take our screenshot when our stream is ready
videoElem.addEventListener('playing', () => {
//If we are too fast, we will get the browser-prompted window
//in the screenshot, so we add a little delay.
setTimeout(() => {
context.drawImage(videoElem, 0, 0, canvas.width, canvas.height);
//We got our screenshot, we can stop to capture the user's screen
let tracks = videoElem.srcObject.getTracks();
tracks.forEach(track => track.stop());
videoElem.srcObject = null;
video.pause();
}, 500);
});
document.getElementById('capture').addEventListener('click', takeScreenshot);
</script>
</body>
</html>
It's the simplest way to do it. It's not perfect and there is room for improvement, but you should get the idea.
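As an example of the "create a Blob" part mentioned above, here is a minimal sketch (assuming the demo's canvas element; the "screenshot.png" file name is just an illustration) that exports the captured frame as a PNG and triggers a download. The same Blob could also be POSTed to a server with fetch.
document.getElementById("screenshot").toBlob(blob => {
  // Wrap the PNG data in an object URL and download it through a temporary link
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "screenshot.png";
  link.click();
  URL.revokeObjectURL(link.href);
}, "image/png");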
-2👍
This is out of scope for the browser: it can only capture within its own boundaries. That is a safety measure by design. You might, however, be able to find an extension that does that for you.