The Global Archicad Community

Stay informed. Get help. Share your knowledge.

Hardware-specific issues - computers, graphics cards, mice/input devices, system benchmarks, protection-key issues, etc.

Moderators: Karl Ottenstein, LaszloNagy, ejrolon, Barry Kelly, gkmethy

#247157
Dear Graphisoft,

My client is using the new Archicad 19 and just went through an upgrade:
from Win 8.1 to Win 10, and
from a GTX 770 2GB to a GTX 980 4GB.

Let me list the specifications of each card first:
GTX 770:
2GB VRAM,
1536 shading units,
3.2 TFLOPS floating-point performance

GTX 980:
4GB VRAM,
2048 shading units,
4.6 TFLOPS floating-point performance

We documented the 770's performance before the upgrade on a normal neighbourhood model, with a couple of trees and houses and with sun shadows enabled, of course.
Before:
approx. 55-60 FPS orbiting around the model on the 770.
After:
50-55 FPS in the same model, orbiting at the same distance, on the 980.
** The redraw cycle for shadows also seems to be longer/slower.

I demand an explanation for my client: why was his upgrade in vain (wasted money)?
He is using a brand-new 4790K CPU at 4.4 GHz with 16 GB of RAM, so there is no such thing as a CPU bottleneck here.

Note that it is not a problem on our side: we ran a test in Maya and got the performance increase from 69 FPS to 116 FPS that we expected.
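For reference, the expected versus observed change can be sanity-checked with quick arithmetic (a rough sketch only: the throughput figures are the commonly quoted peak FP32 specs, and real viewport FPS depends on drivers and other bottlenecks):

```python
# Rough sanity check: expected vs. observed speedup from the card upgrade.
# Raw specs only - real viewport performance depends on far more than FLOPS.

tflops_770 = 3.2   # GTX 770 peak FP32 throughput (TFLOPS)
tflops_980 = 4.6   # GTX 980 peak FP32 throughput (TFLOPS)

expected_speedup = tflops_980 / tflops_770
print(f"expected from raw specs: {expected_speedup:.2f}x")   # ~1.44x

fps_770 = (55 + 60) / 2   # midpoint of the observed FPS range on the 770
fps_980 = (50 + 55) / 2   # midpoint of the observed FPS range on the 980
observed_speedup = fps_980 / fps_770
print(f"observed in Archicad:    {observed_speedup:.2f}x")   # ~0.91x, i.e. a slowdown
```

So instead of the roughly 44% gain the raw specs would suggest, the model orbits about 9% slower.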

I took a look at this: http://helpcenter.graphisoft.com/techno ... 1_64_bit-2

Let me tell you, in the most proper manner I can, that this is not helpful AT ALL, and I think the information in it is not correct.
Just look at it yourselves. What does this "FPS ratio" thing even mean?

Let us take a look at the 10M-polygon test in Win 8.1:
Image
Image
So you claim that a GTX 560 (Fermi) is faster than a brand-new K2200 (Maxwell) Quadro, and at the same time the GTX 750 (Maxwell) is 5x slower?!
Now just a quick check:
GTX 560 ---- 1GB VRAM - 336 shading units -- 1.1 TFLOPS - Fermi
GTX 750 ---- 1GB VRAM - 512 shading units -- 1.1 TFLOPS - Maxwell
GTX 750 Ti - 2GB VRAM - 640 shading units -- 1.3 TFLOPS - Maxwell
K2200 ------ 4GB VRAM - 640 shading units -- 1.3 TFLOPS - Maxwell
And for contrast, the two cards from above:
GTX 770 ---- 2GB VRAM - 1536 shading units - 3.2 TFLOPS - Kepler
GTX 980 ---- 4GB VRAM - 2048 shading units - 4.6 TFLOPS - Maxwell
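To make the contradiction explicit, the same spec list can be ranked by raw throughput - the ordering the hardware alone would predict (a sketch; the TFLOPS figures are the commonly quoted peak FP32 values for these cards):

```python
# Rank the cards by peak FP32 throughput - the ordering raw specs would
# predict, which the benchmark table contradicts.

cards = [
    # (name, VRAM in GB, shading units, peak FP32 TFLOPS, architecture)
    ("GTX 560",    1, 336,  1.1, "Fermi"),
    ("GTX 750",    1, 512,  1.1, "Maxwell"),
    ("GTX 750 Ti", 2, 640,  1.3, "Maxwell"),
    ("K2200",      4, 640,  1.3, "Maxwell"),
    ("GTX 770",    2, 1536, 3.2, "Kepler"),
    ("GTX 980",    4, 2048, 4.6, "Maxwell"),
]

# Sort from highest to lowest throughput.
for name, vram, units, tflops, arch in sorted(cards, key=lambda c: -c[3]):
    print(f"{name:10s} {tflops:.1f} TFLOPS ({arch})")

# By raw specs the GTX 980 should lead and the GTX 560 should sit at the
# bottom - nothing here predicts a 560 beating a K2200.
```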

Does this make any sense to you ?

My client, who upgraded because of your supposedly new, faster, optimised OpenGL, wants an explanation from me, and I want one from you...

By this logic, it seems that a GTX 560 from 2011, selling on eBay for €50, is faster than a brand-new K2200, a GTX 980 and even a GTX 770?

So should my client now throw away his expensive 980 and buy a K2200 for even more money, so that maybe it will be faster than a €50 560, although your own images show that it is actually slower?

I hope this is an official Archicad support forum, because I think that I, and a lot of other people, need some real explanations here.
#247163
This is not an official ARCHICAD support forum.
This is an ARCHICAD user forum where users help each other.
People from Graphisoft often join the conversation, but that can neither be expected nor demanded.
Official support questions should be directed to your local reseller.

I can offer a few thoughts of my own:

- In the article GRAPHISOFT states:
“Professional” vs. “Gaming” video cards

Graphic card manufacturers typically have different product lines targeted for “gamers” and “professional users”. While the hardware setup is very similar for both cards, there are main differences in their firmware and driver. “Gamer” cards (such as Nvidia GeForce and AMD/ATI Radeon) are optimized for 3D games, where speed is more important than image quality. Typically a 3D model in a game contains a low number of polygons with textures applied to them, while polygon count is high in CAD modeling, and the quality of the stationary image of the wireframe or the shaded model is more important than navigation speed. Also, professional cards’ memory usage is optimized for using multiple application windows, while in gaming this is not relevant.

Another key difference between the two product lines is the way they are delivered and supported. Professional cards (such as Nvidia Quadro and AMD FirePro) are built according to the video chip manufacturer’s references, so you can always be sure that the driver delivered by the video chip manufacturer (e.g. Nvidia or AMD) fits your card. The manufacturers of gaming cards (such as ASUS, Sapphire, Gigabyte, PNY, etc.) may diverge from the chip manufacturer’s references, so their drivers may not be compatible with the chip manufacturer’s reference driver.
It is practically impossible to test all the various gaming cards from the multitude of gaming card manufacturers with all their special tweaks and overclocking and various drivers that are different from the reference drivers provided by NVidia or AMD.
I think this could be one of the reasons why in that table the 560 card is much faster than the 750.

Another thought: this is all under Windows 10, an operating system that came out only two months ago, so the quality and speed of the drivers will probably still improve. If I understand correctly, you had the GTX 770 2GB under Win 8.1 and now you have the GTX 980 4GB under Win 10. The driver for the old card could be pretty well polished and optimized by now on Win 8.1, while the new one may not be very optimized yet under Win 10. Just another guess of mine.

But the main point in the above is that with gaming cards it is really not that easy to give any official recommendations because of the huge number of variables involved. The same card could be slow with a driver, then fast with a newer driver.

The Maya results may or may not be relevant; Nvidia may have specifically tuned a new Win 10 driver for Maya, which is why it gives the higher FPS rate you expected.
#247169
You are missing the point a little....

Nvidia uses practically the same drivers for 8.1 and 10 - you can try it yourself: install the 8.1 driver on 10, and vice versa...

But how can a 560 be faster than a K2200, which is almost "new", according to Graphisoft's table?

So what should we do?

1. sell the 980 and get back the 770

2. try to find a very old 560

3. buy an even more expensive K2200

We are at a loss here now anyway, and the tables proved to be very wrong. There is also no performance difference in other applications between Win 8.1 and Win 10 - not even in AutoCAD or 3ds Max.

I am an IT caretaker, and now I have a very disappointed client, plus more clients who want to upgrade, but there is absolutely no way to find accurate information.

I also think that Graphisoft is big enough to test at least five cards on a yearly basis. Even we could do that, if we had the tools (real benchmarks) and the time to do so.

Thank you anyway.

PS:
I hoped to get at least some real information, but after searching far and wide here and on the internet, that seems to be impossible. Support also gives no direct answers to my questions.
#247236
If you are building a workstation for a client who uses CAD software and you are specifying gaming cards, I think you might want to research things a bit.

This is not an ArchiCAD exclusive 'problem'.

http://graitec.co.uk/hardware/cad-works ... hics-cards
http://graitec.co.uk/hardware/cad-works ... g-graphics

here is some information.
#247267
I think you should research why other software developers have been optimising their applications for consumer-grade cards... since 2009.

Or just buy a quadro and see for yourself

All that matters are REAL numbers, not pages of Quadro/FirePro propaganda without any real numbers.

Image

Image

Image

Furryball = mental ray = iray = RT stuff:
http://www.aaa-studio.cz/furrybench/benchResults4.php

Want more Opengl ?
http://cbscores.com/index.php?sort=ogl&order=desc

One more - all in all, a summary of Photoshop's GPU-accelerated effects:
Image


here you go...

When I was younger I worked with Quadro cards - very slow and time-consuming - until I bought a GTX 580, and it was waaaaaay faster at everything.

Do you mind explaining the Graphisoft logic, which does not match ANY of the real numbers?
Speed (per your table):
GTX 560 > K2200 > GTX 750 (the K2200 supposedly 5x faster than the 750)
Price:
K2200 (€600) > GTX 750 (€100) > GTX 560 (€50)
Architecture:
K2200 == GTX 750 (both Maxwell)
GTX 560 == Fermi, 2011, gaming

Do you even know how RealHack works?
#247274
Be careful when viewing such 'performance' charts; many of the benchmarks are just as much CPU-bound as GPU-bound.
If you choose to use a card that is not within the recommended specifications of the program you are using, you take your chances.
My experience in ArchiCAD over many (18+) years is that a professional Quadro card gives a good level of performance and better image/texture quality.

Regards,
Scott
#247278
Our current workstations have Quadro K2000 cards. These work fine and are recommended by the company that does our IT work. It's a fairly affordable piece of hardware. We are running Windows 7 on these machines. All together a very stable setup.

ArchiCAD's competitors (Revit, Allplan, Solidworks, etc.) all have the same support policy for professional vs. gaming cards, as far as I know.

I do not know how RealHack works, and I do not need my workspace 3D to be shiny; I need to be able to work on fairly high-poly models without delays in rebuild. In this department the Quadro K2000 delivers without problems in ArchiCAD.

2D views also work without any delays when zooming, panning, using trace and reference, etc.

I don't even recall when we last updated the driver, it still works fine even with ArchiCAD 19.

It is stable and in a world of deadlines and tough competition I am happy with stable.
#247391
The K2000 is just too slow by today's standards.

OK, I understand that some of you don't understand why companies pushed Quadro cards in the past and still do today... I will not start giving you a full explanation; let me keep it short:
it is all about licence sales and money... today's viewport technology does not depend on the old custom OpenGL interface, which deliberately limits new high-end GPUs to using only 50% of the GPU... It is done on purpose.

As long as you have not tried other software and noticed the difference between a K2000 and a 780 Ti, let's say you are just blindly accepting the facts they are serving you.

But you refuse to believe independent proof on the internet... proof that is not financed by the software developers.

Oh, I just fired up After Effects and rendered a scene on a 780 Ti, and it was 10x faster than on a Quadro 4000... that is enough proof for me.

Let's finish with this: Archicad has the slowest 3D viewport among all 3D-capable software. And never mind Solidworks - it also works faster on a 280X than on a FirePro that costs three times more.

Years ago there were soft-mods and hardware mods to get around this...

I'll just start recommending Autodesk products: they don't limit high-end hardware, they work on any GPU, and their speed scales in line with a card's floating-point (FLOPS) performance.

I just wanted to hear some real info, but all you guys can do is cite and recite the same promo material, signed off by two companies...

At least there are devs who know that if their software is not GPU-limited, it will sell more...
Bye

For 2D, SketchUp's LayOut is enough - it works wonders even on integrated GPUs... no need for an Archicad licence.
#248193
I wish Graphisoft would publish a list of its current limitations with regard to CPUs, GPUs, DirectX, OpenGL versions, etc.

We all want to use the best hardware we can afford, but we do not want to unknowingly buy equipment that cannot be used with ArchiCAD. I am using Maxwell Render, so I have a reason for using a graphics card that would otherwise be a waste of money if I were using it only with ArchiCAD and CineRender.

If I buy a modern NVIDIA graphics card made for DirectX 12 and OpenGL 4.5, is it a waste of money for use with ArchiCAD 20? Will it even work with ArchiCAD?

If it were a difficult thing to know, I would feel bad about asking. But I think the people at Graphisoft do know, and they could provide good answers. Is it because they are embarrassed to expose the limitations of ArchiCAD? I don't know, because I am relatively ignorant about computer technology.

I do know enough to recognize that ArchiCAD has some limitations with regard to the latest high-end processors and graphics cards. I want to know specifically what those limitations are, and how long I can expect them to be a factor, so that I can make an informed choice about buying new equipment. Is that too much to ask?