Posts

lavapipe reporting Vulkan 1.1 (not compliant)

The lavapipe Vulkan software rasterizer in Mesa is now reporting Vulkan 1.1 support. It passes all the CTS tests for the new 1.1 features, but it still fails the same 1.0 tests as before, so it isn't that close to conformant (line and point rendering are the main problem areas). A bunch of the 1.2 features are also implemented, so that might not be too far away, though 16-bit shader ops and depth resolve are looking a bit tricky. If there are any specific features anyone wants to see, or any crazy places/ideas for using lavapipe out there, please either file a gitlab issue or hit me up on twitter @DaveAirlie.
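A quick way to see what version lavapipe advertises is to query the physical device properties. Below is a minimal sketch, assuming the Vulkan loader and headers are installed and lavapipe is the active ICD; the "llvmpipe" device name and the build command are assumptions about a typical setup, not anything specific to this work.

/* sketch: print each Vulkan device's name and reported API version.
 * build with something like: cc check_version.c -lvulkan */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16)
        count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        /* lavapipe typically shows up with an "llvmpipe (LLVM ...)" device
         * name and should now report 1.1 here. */
        printf("%s: Vulkan %u.%u\n", props.deviceName,
               VK_VERSION_MAJOR(props.apiVersion),
               VK_VERSION_MINOR(props.apiVersion));
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}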

crocus: gallium for the gen4-7 generation

The crocus project was recently mentioned in a phoronix article. The article covered most of the background for the project. Crocus is a gallium driver covering the gen4-gen7 families of Intel GPUs. The basic GPU list is 965, GM45, Ironlake, Sandybridge, Ivybridge and Haswell, with some variants thrown in. This hardware currently uses the Intel classic 965 driver. This hardware is all gallium capable, and since we'd like to put the classic drivers out to pasture and remove support for the old infrastructure, it would be nice to have these generations supported by a modern gallium driver. The project was initiated by Ilia Mirkin last year, and I've expended some time in small bursts moving it forward. There have been some other small contributions from the community. The basis of the project is a fork of the iris driver with the old relocation-based batchbuffer and state management added back in. I started my focus mostly on the older gen4/5 hardware since it was simpler

sketchy vulkan benchmarks: lavapipe vs swiftshader

Mike, the zink dev, mentioned that swiftshader seemed slow at some stuff, and I realised I've never expended much effort in checking swiftshader vs llvmpipe in benchmarks. The thing is, CPU rendering is going to top out on memory bandwidth pretty quickly, but I decided to do some rough napkin benchmarks using the Vulkan samples from Sascha Willems. I'd also expected llvmpipe to be slower, since swiftshader has a few dedicated devs and is used by google instead of mesa for lots of things, while llvmpipe hasn't really gotten dedicated development resources. I picked a random smattering of Vulkan samples and ran them on my Ryzen workstation, without doing anything else, in their default window size. The first number is lavapipe fps, the second swiftshader:
gears: 336 / 309
instancing: 3 / 3
ssao: 19 / 9
deferredmultisampling: 11 / 4
computeparticles: 9 / 8
computeshader: 73 / 57
computeshader sharpen: 54 / 34
I guess swift is just a good marketing name, now I'm not sure
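For comparisons like this you also need to control which ICD the Vulkan loader picks up. One way is the loader's VK_ICD_FILENAMES environment variable; below is a small sketch that points it at a specific driver manifest and then launches a sample. The manifest paths and sample name are examples only and will vary by distro and build.

/* sketch: run a Vulkan sample against a chosen software ICD.
 * usage (hypothetical): ./run_with_icd ./vulkan_sample_gears */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <vulkan-sample> [args...]\n", argv[0]);
        return 1;
    }

    /* Point the Vulkan loader at lavapipe's ICD manifest; swap in
     * SwiftShader's manifest (e.g. vk_swiftshader_icd.json) to compare.
     * Paths are examples, adjust for your install. */
    setenv("VK_ICD_FILENAMES",
           "/usr/share/vulkan/icd.d/lvp_icd.x86_64.json", 1);

    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}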

Linux graphics, why sharing code with Windows isn't always a win.

A recent article on phoronix has some commentary about sharing code between Windows and Linux, and how this seems to be a metric that Intel likes. I'd like to explore this idea a bit and explain why I believe it's bad for Linux-based distros and our open source development models in the graphics area. tl;dr there is a big difference between open-source released and open-source developed projects in terms of sustainability and community. The Linux graphics stack, from a distro vendor's point of view, is made up of two main projects: the Linux kernel and Mesa userspace. These two projects are developed in the open with completely open source, vendor-agnostic practices. There is no vendor controlling either project, and both projects have a goal of trying to maximise shared code and shared processes/coding standards across drivers from all vendors. This cross-vendor synergy is very important to the functioning ecosystem that is the Linux graphics stack. The stack also relies in some place

llvmpipe is OpenGL 4.5 conformant.

(I just sent the below email to the mesa3d developers list.) Just to let everyone know, a month ago I submitted the 20.2 llvmpipe driver for OpenGL 4.5 conformance under the SPI/X.org umbrella, and it is now official[1]. Thanks to everyone who helped me drive this forward, and to all the contributors both to llvmpipe and the general Mesa stack that enabled this. Big shout out to Roland Scheidegger for helping review the mountain of patches I produced in this effort. My next plans involve submitting lavapipe for Vulkan 1.0 conformance; it's at 99% or so of CTS, but there are line drawing, sampler accuracy and some snorm blending failures I have to work out. I also ran the OpenCL 3.0 conformance suite against clover/llvmpipe yesterday and have some vague hopes of driving that to some sort of completion. (For GL 4.6, only texture anisotropy is really missing; I've got patches for SPIR-V support, in case someone was feeling adventurous.) Dave. [1] https://www.khronos.org/conform

lavapipe: a *software* swrast vulkan layer FAQ

(project was renamed from vallium to lavapipe) I had some requirements for writing a vulkan software rasterizer within the Mesa project. I took some time to look at the options and realised that just writing a vulkan layer on top of gallium's llvmpipe would be a good answer for this problem. However, in doing so I knew people would ask why this wouldn't work for a hardware driver. tl;dr DO NOT USE LAVAPIPE OVER A GALLIUM HW DRIVER.

What is lavapipe?
The lavapipe layer is a gallium frontend. It takes the Vulkan API and roughly translates it into the gallium API.

How does it do that?
Vulkan is a low-level API; it allows the user to allocate memory, create resources and record command buffers, amongst other things. When a hw vulkan driver is recording a command buffer, it is putting hw-specific commands into it that will be run directly on the GPU. These command buffers are submitted to queues when the app wants to execute them. Gallium is a context-level API, i.e. like OpenGL/D3D10. Th
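To make the record-then-replay idea concrete, here is a deliberately over-simplified sketch. The types and functions are hypothetical stand-ins, not lavapipe's actual internals: API-level commands get appended to a command buffer, and "submit" walks them and turns each one into a context-level call executed on the CPU.

/* hypothetical sketch of a Vulkan-on-gallium style record/replay loop */
#include <stdio.h>
#include <stdlib.h>

enum cmd_type { CMD_BIND_PIPELINE, CMD_DRAW };

struct cmd {
    enum cmd_type type;
    struct cmd *next;
    union {
        struct { int pipeline_id; } bind;
        struct { unsigned first_vertex, vertex_count; } draw;
    } u;
};

struct cmd_buffer { struct cmd *head, *tail; };   /* what vkCmd* calls fill in */

/* stand-ins for gallium-style context entry points (illustrative only) */
static void ctx_bind_state(int pipeline_id)
{
    printf("gallium: bind pipeline state %d\n", pipeline_id);
}

static void ctx_draw(unsigned first, unsigned count)
{
    printf("gallium: draw %u vertices starting at %u\n", count, first);
}

/* recording: append an API-level command to the buffer, nothing runs yet */
static void record(struct cmd_buffer *cb, struct cmd c)
{
    struct cmd *n = malloc(sizeof(*n));
    *n = c;
    n->next = NULL;
    if (cb->tail)
        cb->tail->next = n;
    else
        cb->head = n;
    cb->tail = n;
}

/* "queue submit": replay the recorded commands as context-level calls */
static void submit(struct cmd_buffer *cb)
{
    for (struct cmd *c = cb->head; c; c = c->next) {
        switch (c->type) {
        case CMD_BIND_PIPELINE:
            ctx_bind_state(c->u.bind.pipeline_id);
            break;
        case CMD_DRAW:
            ctx_draw(c->u.draw.first_vertex, c->u.draw.vertex_count);
            break;
        }
    }
}

int main(void)
{
    struct cmd_buffer cb = { 0 };
    record(&cb, (struct cmd){ .type = CMD_BIND_PIPELINE, .u.bind = { 1 } });
    record(&cb, (struct cmd){ .type = CMD_DRAW, .u.draw = { 0, 3 } });
    submit(&cb);
    return 0;
}

A hardware driver instead writes GPU-specific packets at record time, which is why this layering only makes sense on top of a CPU rasterizer like llvmpipe.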

DirectX on Linux - what it is/isn't

This morning I saw two things that were Microsoft and Linux graphics related. https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/
a) DirectX on Linux for compute workloads
b) Linux GUI apps on Windows
At first I thought these were related, but it appears that, at least presently, these are quite orthogonal projects. First up, to clarify for the people who jump to insane conclusions: the DX on Linux work is a WSL2-only thing. Microsoft are not in any way bringing DX12 to Linux outside of the Windows environment. They are also in no way open sourcing any of the DX12 driver code. They are recompiling the DX12 userspace drivers (from GPU vendors) into Linux shared libraries, and running them on a kernel driver shim that transfers the kernel interface up to the closed source Windows kernel driver. This is in no way useful for having DX12 on Linux bare metal or anywhere other than in a WSL2 environment. It is not useful for Linux gaming. Microsoft have sub