Author Topic: Future of powerbasic  (Read 212734 times)

0 Members and 1 Guest are viewing this topic.

Offline Brice Manuel

  • Full Member
  • ***
  • Posts: 154
  • User-Rate: +0/-0
Re: Future of powerbasic
« Reply #315 on: December 07, 2014, 03:28:47 AM »
Theo,
I have read most of your posts and Brice is correct.
You say you want a 64-bit compiler, you get it... then you say everything negative that you can!   ???

I am a few days away from my 11th anniversary of using PureBasic.  During that time it has continued to grow in functionality and in the number of platforms supported (some old platforms like the Amiga and the PowerPC Macs have been dropped), and it has continued to adhere to traditional BASIC as much as possible.  All this has been done for one price, without being subjected to a shakedown for more money every year or two for things that should have been included to begin with.  A language that continually grows and is actively supported will always attract new users, eliminating the need to make all of your money off existing customers instead of seeking new ones.

The introduction of the LTS version has been VERY welcome, as it allows you to use the stable version for any serious work but still play with the "bleeding edge" version, so you can get used to the new features before the time comes that you need them.

The only weak point for me is that I dislike the included GUI designer, and there is no longer a decent third-party GUI designer being sold.  Most people simply don't use a GUI designer for their GUIs, so I am definitely in the minority with this "dislike".  That said, there is nothing wrong with the included GUI designer, and those who use it love it.  I am just very picky when it comes to GUI designers.

For those who do NOT want a BASIC compiler that properly supports modern hardware and technology, and would prefer a legacy BASIC that never really progresses, True BASIC is still being sold and actively developed, and it is the original BASIC (all other BASICs have been imitators).  I would never recommend it for anything but hobby use, and never for anything you intend to actually release (even for free).

Offline Bob Houle

  • Newbie
  • *
  • Posts: 41
  • User-Rate: +3/-3
  • "It was too lonely at the top."
Re: Future of powerbasic
« Reply #316 on: December 07, 2014, 06:48:39 PM »
Do you know of any PureBasic demo(s) that would knock my socks off?

All the demos I have ever seen look more like those from the DOS days.

If you have any screen shot(s) or link(s) that you could share, I would be very happy to revise my opinion.

Thanks

...

Patrice,

I've been a user of your Winlift program (2003), so I know you don't impress easily. {grin}

Would the ability to use PostgreSQL or SQLite out of the box knock your socks off... I doubt it.

But that's my point... as a 'bare metal' programming tool PureBasic probably outshines PowerBASIC, only because it provides much more OUT-OF-THE-BOX.

But, I've included a few graphical examples (Zipped) to show what's possible...

Offline José Roca

  • Administrator
  • Hero Member
  • *****
  • Posts: 2481
  • User-Rate: +204/-0
Re: Future of powerbasic
« Reply #317 on: December 07, 2014, 10:40:37 PM »
Quote
Would the ability to use PostgreSQL or SQLite out of the box knock your socks off... I doubt it.

I already have headers and a class for SQLite, and translating the headers of PostgreSQL would not be a difficult task.

My problem with these cross-platform compilers, such as PureBasic and FreeBASIC, is that they are not well suited for the kind of programming that I do.

Offline Theo Gottwald

  • Administrator
  • Hero Member
  • *****
  • Posts: 918
  • User-Rate: +30/-4
    • it-berater
Re: Future of powerbasic
« Reply #318 on: December 08, 2014, 09:34:13 AM »
Why do you get so religious here, folks?
In fact I did not say anything negative. A Macro Assembler is nothing negative.

Now think for a minute. If Fred were to build PureBasic a second time, would he do it the same way?
I think not. He started back then, and what you get today is a large toolbox.

What exactly was negative?
A Macro Assembler is one of the best programming tools you can get, and PureBasic is built on one "as an enhancement", in a way.
But it does not rise far above it once you look at code that is difficult to compile.
My problem with it was that almost any time I used it, I crashed into some limitation that a "real compiler" like PB doesn't have.

If I use GOSUB and run into an error, then thanks to the stack frame that PB uses, the compiler will clean it up when I leave the procedure.
And of course I can use real GOSUB/RETURN. I use it very often.
All the needed variable types are there. Yes, some types were added to PureBasic later, but it did not look to me as if it all really fits together the way it does in PowerBASIC.
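Theo's GOSUB point can be sketched loosely in C++. This is a rough analogy only, not how PB is actually implemented: if the GOSUB return addresses live inside the procedure's own frame, then leaving the procedure, even abnormally, discards any pending returns automatically. The name `runProcedure` and its trace strings are invented purely for illustration.

```cpp
#include <string>
#include <vector>

// Rough analogy of GOSUB/RETURN with a per-procedure return stack.
// When the procedure exits -- even early, mid-GOSUB -- the frame-local
// 'gosubStack' is destroyed, so no stale return addresses can leak.
// (Illustrative sketch only; not PB internals.)
std::string runProcedure(bool abortInsideGosub)
{
    std::vector<const char*> gosubStack;  // frame-local "return addresses"
    std::string trace;

    gosubStack.push_back("after-sub1");   // GOSUB Sub1
    trace += "sub1;";
    if (abortInsideGosub)
        return trace;                     // early exit: gosubStack is torn down
    gosubStack.pop_back();                // RETURN

    gosubStack.push_back("after-sub2");   // GOSUB Sub2
    trace += "sub2;";
    gosubStack.pop_back();                // RETURN

    return trace;                         // normal exit, stack empty
}
```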

And the string engine in PowerBASIC saves me from most of the problems I have in other languages.
I have tried the same in PureBasic as well, but it never worked for me as expected.
The strings are just not like the PowerBASIC strings.

Having said that about the "core" of the compiler (and that's where I would like to see changes),
I will add that the many libraries for all sorts of uses are a great tool set for people who want to build applications in a "construction set" style.

And while the system is different from PowerBASIC, it's still usable, and I have also done some applications in PureBasic where I needed 64 bit.
It's just different. And... yes, I prefer PowerBASIC.

Let me add that if I had used PureBasic for as long as others have, I would probably know workarounds for most of the problems.
It's like that in any programming language: in the end it's the programmer, not the tool.
« Last Edit: December 08, 2014, 09:40:47 AM by Theo Gottwald »

Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #319 on: January 16, 2015, 03:03:44 PM »
I confess to being very disappointed in what has happened with the PB forum lately. I see Gary as a good guy who tried very hard to get the SDK/API subforum going, and we saw code from Jose, Patrice and a number of the PB forum members, and I managed to get a bit of stuff done as well. But the endless trolling, arguments, insults and general influence peddling have done the damage, and very few are game to post API-based code any longer as the talking heads just put the boot into it.

Like Jose, I am not dependent on the PB forum and can post PB examples in the MASM forum, which is now a better choice as no nonsense is allowed there at all. What has p*ssed me off the most is that there have been a lot of very good programmers in the PB forum in the past who faded away with the flood of DDT code. They could have come back and posted decent API-based code, but with the endless cr*p going down, they just stopped bothering, or in fact did not bother at all.


Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #320 on: January 16, 2015, 04:41:24 PM »
Patrice,

I don't have any beef with DDT; I think Bob hit the mark with a simplified system for people who could not put in the effort to learn API-style coding. But we have a very similar view of the folks who thought that peer pressure would ever affect those who put in the effort years ago to learn how to write Windows code properly. I have not won any friends by labeling this nonsense as membership of the "Mickey Mouse Club", but eventually you get tired of people who keep trying to cripple the language to maintain their own flavour of influence peddling.

32 bit PowerBASIC is in its twilight, as Bob never had the chance to finish the 64 bit version, but it is still a very good tool in the 32 bit area. Yet trying to promote its advanced features in the PB forum is like p*ssing into the wind and wondering why you get wet. I feel sorry for Gary, as he has tried hard to get it going again, but the "Mickey Mouse Club" will continue to sob into their chardonnay until it turns into the "Grapes of Wrath", and that will take down the viability of the language until there is no-one left.

I think you said it all a long time ago, behaving like "beached whales".  ;D

Offline Theo Gottwald

  • Administrator
  • Hero Member
  • *****
  • Posts: 918
  • User-Rate: +30/-4
    • it-berater
Re: Future of powerbasic
« Reply #321 on: January 19, 2015, 10:37:03 PM »
We had several disputes with Bob in the past.
Above all, however, only people who were very close to Bob and the PB company know the whole truth.
What we see as outsiders is this:
1. Regarding 32 bit: PB is still usable and a very good program.
2. Regarding 64 bit: we need to look around.

How is Charles Pegge doing?
Some time ago he told me he could make his compiler compile (non-DDT) PB-like code into 64 bit.
Has anybody tested his newest creations?


Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #322 on: January 20, 2015, 12:24:22 AM »
I basically agree with that; while 32 bit is in its twilight, there is still a lot of life left in it, and the current versions of PB are both very useful tools. Now, it's not that I am biased, but in the 32 bit area MASM is a truly wonderful tool once you get used to its many bad manners. It has never been softened into a friendly consumer toy; it pelts unintelligible error messages at you, and it has a macro engine with very few peers, at the price of being only ever vaguely intelligible, buggy and under-documented. It was clearly designed for the backroom boys and girls at Microsoft, where you had to know its quirks, but it does force you to write technically correct code.

64 bit is the future, but it's not coming all that fast, and for compilers that has a lot to do with Win64's incredibly messy stack design. Where 32 bit PE files had STDCALL, C and any flavour of FASTCALL you wanted to use with a 4 byte aligned stack, Win64 has a 16 byte aligned stack where you first have to allocate stack space below the current location, then call API functions passing the first four arguments in specified registers and any further arguments on the stack, all while maintaining 16 byte stack alignment. Where the 32 bit PE format was designed by the old VAX guys and was a clean, clear, full 32 bit design, Win64 is a mess, something like the old 16 bit NE format was for Win3.??, except that it's a 32/64 bit hybrid.
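Steve's description of the Win64 call-site rules can be reduced to a bit of arithmetic: 32 bytes of "shadow space" for the four register arguments, 8 bytes for each further stack argument, rounded up to a 16-byte multiple. The sketch below is a simplification (a real prologue also accounts for the 8 bytes pushed by `call`, saved registers and locals), and `stackToReserve` is a name I made up, not anything from MASM or the ABI documents.

```cpp
#include <cstdint>

// Simplified Win64 call-site stack reservation: the first four
// integer/pointer args travel in RCX, RDX, R8, R9, but the caller must
// still reserve 32 bytes of shadow space for them, plus 8 bytes per
// additional stack argument, keeping RSP 16-byte aligned.
// (Illustrative arithmetic only, not actual compiler output.)
constexpr std::uint64_t stackToReserve(std::uint64_t stackArgs)
{
    // 32 bytes shadow space + 8 per stack argument, rounded up to 16
    return (32 + stackArgs * 8 + 15) & ~std::uint64_t(15);
}
```

So a call with six arguments (four in registers, two on the stack) reserves 48 bytes, not 16, which is part of why Win64 code bloats compared to a 32 bit STDCALL call site.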

We won't see real 64 bit performance until we have full long mode and hardware with terabytes of memory; 32 gig of RAM in Win64 is akin to what 4 meg was in Win16. Whoopee!  ;D

Offline Charles Pegge

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 672
  • User-Rate: +27/-1
    • Charles Pegge
Re: Future of powerbasic
« Reply #323 on: January 20, 2015, 07:25:03 AM »

I totally agree, the 64 bit calling conventions are horrid, and break the minimal conformity between Linux systems and Microsoft, which we had with CDECL.

It would be far better, in the long term,  to adopt a 64 bit CDECL as the universal standard for libraries, and leave the kernel developers to do their own thing. I hope this happens with the next generation of hardware (memristors?)

Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #324 on: January 20, 2015, 01:20:49 PM »
It's not a simple question to answer, Patrice, as there are both hardware and software considerations. Almost without exception, anything to do with multimedia will do better with 64 bit base registers and twice as many registers, but that comes at the price of other stuff getting slower. I have 3 quad core boxes handy: the i7 with Win7 64 on it, the 3 gig Core2 quad which was my last dev box with XP SP3, and a NAS box with XP which is a 2.5 gig Q6600 quad.

I regularly benchmark algos, and while SSE is clearly faster on the i7, some algos are faster on the much slower Q6600. Later hardware is giving more silicon to SSE and less to the integer instructions that use the 8 GP registers.

Offline Charles Pegge

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 672
  • User-Rate: +27/-1
    • Charles Pegge
Re: Future of powerbasic
« Reply #325 on: January 20, 2015, 01:35:25 PM »
I think the 64 bit calling convention was for the benefit of kernel developers, not application developers, and this is where the speed advantage is gained.  Most higher-level functions will not benefit from passing parameters in volatile registers.

Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #326 on: January 21, 2015, 10:57:46 AM »
Hi Patrice,

There will certainly be types of code that show the advantages of 64 bit, and most probably the type of advanced work you do will benefit the most from it, but many others will not. I have attached 2 versions of a Microsoft tool called ZOOMIN which is available in the Win2000 SDK. While I cannot post the source code due to licensing conditions, I have built both a 32 and a 64 bit version using almost identical Microsoft code, and about the only difference is that the 64 bit version is twice the size for no performance gain.

Win7 64 makes a mess of the selection rectangle display, but both versions work. I have attached the two versions to compare.

Offline Frederick J. Harris

  • Hero Member
  • *****
  • Posts: 914
  • User-Rate: +16/-0
    • Frederick J. Harris
Re: Future of powerbasic
« Reply #327 on: January 21, 2015, 09:55:36 PM »
Are you saying, Patrice, that your x64/x32 comparisons use wide characters in the x64 versions and narrow in the x32 versions?  If so, I'm thinking that could easily account for a 20% speed difference.

I have done some comparisons myself, particularly involving string buffer manipulations, and my wide character runs are invariably slower than my ANSI runs, presumably because the buffers and memory allocations are twice as big.  So in my limited tests I'm presuming the timing differences are due not to x64/x86 differences but to ANSI versus wide.
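Fred's arithmetic can be made explicit with a small sketch: the character data alone doubles when moving from ANSI to wide. `char16_t` (2 bytes on every platform) stands in here for the Windows `wchar_t`, since `wchar_t` is 4 bytes on Linux; `widen()` is a deliberately naive converter, valid only for 7-bit input, and all three names are mine, not from Fred's test.

```cpp
#include <cstddef>
#include <string>

// Byte footprint of the character data in narrow vs wide buffers.
std::size_t narrowBytes(const std::string& s)
{
    return s.size() * sizeof(char);           // 1 byte per character
}

std::size_t wideBytes(const std::u16string& s)
{
    return s.size() * sizeof(char16_t);       // 2 bytes per character
}

// Naive ANSI -> UTF-16 widening: zero-extends each char (7-bit safe only).
std::u16string widen(const std::string& narrow)
{
    return std::u16string(narrow.begin(), narrow.end());
}
```

For the 2,000,000-character buffers in the tests above, that is 2,000,000 bytes narrow versus 4,000,000 bytes wide, so twice the memory traffic before the algorithm even does anything.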

Offline Steve Hutchesson

  • Jr. Member
  • **
  • Posts: 83
  • User-Rate: +6/-5
    • The MASM Forum
Re: Future of powerbasic
« Reply #328 on: January 21, 2015, 11:21:09 PM »
I don't think there is a debate here; some things will be faster and some slower. The 2 versions of ZOOMIN are both UNICODE, one built with VC 2003 in 32 bit, the other with VC 2010 in 64 bit.

Now, while it certainly makes sense for the advanced work Patrice is doing, I mainly write tools these days, and you pay every price in terms of size and performance in 64 bit when writing tools, especially those that have to work with non-SSE data sizes. Most complex algos do not get faster in 64 bit but often get slower. I have written both 64 bit and 128 bit code in Win32 using later SSE instructions, and in areas where streaming fits the task they produce some very high speed results, but many tasks cannot be done with streaming instructions.

My beef is not with 64 bit; it's with the implementation of Win64. What I hope, as the hardware gets better, is to see the same type of shift we saw from the hybrid 16/32 Win95 OEM to Win2000, which was close to full 32 bit. Shifting from the hybrid 32/64 bit of current 64 bit Windows to a full long mode 64 bit will bring some big performance gains, but only if the tools get a lot better, and I am not going to hold my breath waiting.  ;D

Offline Frederick J. Harris

  • Hero Member
  • *****
  • Posts: 914
  • User-Rate: +16/-0
    • Frederick J. Harris
Re: Future of powerbasic
« Reply #329 on: January 22, 2015, 12:21:49 AM »
Here are some results from a nice little test I just ran, from some work I was doing a couple of years ago when the issue came up of PowerBASIC's speed in comparison to C and C++.  The interesting issue at the time was that some MSVC compilations were killing PowerBASIC in the same tests, by about a factor of 10!  When Paul Dixon disassembled the VC code he discovered a very interesting thing: the compiler was examining the algorithm, determining it wasn't efficient, and re-writing it!  In other words: optimization!  The asm code generated by the compiler wasn't anything like the PowerBASIC code, which was just translating the source 'as is' into machine instructions.  So to make the comparison useful, John Gleason suggested something more complicated than those little ditties all the compiler writers know about and hone their code against, which result in 'tainted' speed figures.  Anyway, here's John Gleason's algorithm, slightly modified by me ...

Code: [Select]
// Exercise
// =======================================
// 1) Create a 2MB string of dashes;
// 2) Change every 7th dash to a "P";
// 3) Replace every "P" with a "PU" (hehehe);
// 4) Replace every dash with an "8";
// 5) Put in a CrLf every 90 characters;
// 6) Output last 4K to Message Box.

I'll shortly post one of my many C++ examples that implement this, but here are my results of 10 runs as follows...

Code: [Select]
x86 32 bit code
===================================================
32 bit ansi string buffers, i.e., 2,000,000 chars and 2,000,000 bytes      18.6 ticks
32 bit wide string buffers, i.e., 2,000,000 wchars and 4,000,000 bytes     31.5 ticks

x64 64 bit code
===================================================
64 bit ansi string buffers, i.e., 2,000,000 chars and 2,000,000 bytes      28.1 ticks
64 bit wide string buffers, i.e., 2,000,000 wchars and 4,000,000 bytes     45.2 ticks

Code: [Select]
Narrow  x86
===========

31
15
15
16
16
31
16
15
16
15
===
186  186/10 = 18.6 ticks


Wide    x86
===========

16
32
31
16
47
32
32
31
47
31
===
315  315/10 = 31.5 ticks


narrow  x64
===========

31
15
47
16
16
31
47
31
16
31
===
281  281/10 = 28.1 ticks


wide  x64
=========
47
47
47
46
32
46
47
46
47
47
===
452  452/10 = 45.2 ticks

As can be seen above, ansi is faster than wide character, and 32 bit is faster than 64 bit.  The slowest is unicode under native 64 bit, and the fastest is ansi in 32 bit mode.  I used the MinGW GCC x86/x64 compiler for the x64 compilations, and an older MinGW GCC 32 bit compiler for the 32 bit compiles.  Here is the source.  To compile for unicode just uncomment the defines at top...

Code: [Select]
//#ifndef UNICODE
//#define  UNICODE      //strCls34U.cpp
//#endif
//#ifndef _UNICODE
//#define  _UNICODE
//#endif
#include <Windows.h>  //for MessageBox(), GetTickCount() and GlobalAlloc()
#include <tchar.h>
#include <string.h>   //for strncpy(), strcpy(), strcat(), etc.
#include <cstdio>     //for sprintf()

enum                                              // Exercise
{                                                 // =======================================
 NUMBER         = 2000001,                        // 1) Create a 2MB string of dashes;
 LINE_LENGTH    = 90,                             // 2) Change every 7th dash to a "P";
 NUM_PS         = NUMBER/7+1,                     // 3) Replace every "P" with a "PU" (hehehe);
 PU_EXT_LENGTH  = NUMBER+NUM_PS,                  // 4) Replace every dash with an "8";
 NUM_FULL_LINES = PU_EXT_LENGTH/LINE_LENGTH,      // 5) Put in a CrLf every 90 characters;
 MAX_MEM        = PU_EXT_LENGTH+NUM_FULL_LINES*2  // 6) Output last 4K to Message Box.
};

int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevIns, LPSTR lpszArg, int nCmdShow)
{
 TCHAR szMsg[64],szTmp[16];             //for message box
 int i=0,iCtr=0,j;                      //iterators/counters
 TCHAR* s1=NULL;                        //pointers to null terminated
 TCHAR* s2=NULL;                        //character array buffers

 DWORD tick=GetTickCount();
 s1=(TCHAR*)GlobalAlloc(GPTR,MAX_MEM*sizeof(TCHAR));  //Allocate two buffers big enough to hold the original NUMBER of chars
 s2=(TCHAR*)GlobalAlloc(GPTR,MAX_MEM*sizeof(TCHAR));  //plus substitution of PUs for Ps and CrLfs after each LINE_LENGTH chunk.

 for(i=0; i<NUMBER; i++)                // 1) Create a 2MB string of dashes
     s1[i]=_T('-');

 for(i=0; i<NUMBER; i++, iCtr++)        // 2) Change every 7th dash to a "P"
 {
     if(iCtr==7)
     {
        s1[i]=_T('P');
        iCtr=0;
     }
 }

 iCtr=0;                                // 3) Substitute 'PUs' for 'Ps'
 for(i=0; i<NUMBER; i++)
 {
     if(_tcsncmp(s1+i,_T("P"),1)==0)
     {
        _tcscpy(s2+iCtr,_T("PU"));
        iCtr+=2;
     }
     else
     {
        s2[iCtr]=s1[i];
        iCtr++;
     }
 }

 for(i=0; i<PU_EXT_LENGTH; i++)         // 4) Replace every '-' with an 8;
 {
     if(s2[i]==_T('-'))
        s2[i]=56;   //56 is '8'
 }

 i=0, j=0, iCtr=0;                      // 5)Put in a CrLf every 90 characters
 while(i<PU_EXT_LENGTH)
 {
    s1[j]=s2[i];
    i++, j++, iCtr++;
    if(iCtr==LINE_LENGTH)
    {
       s1[j]=13, j++;
       s1[j]=10, j++;
       iCtr=0;
    }
 }
 s1[j]=0, s2[0]=0;
 _tcsncpy(s2,&s1[j]-4001,4000);         // 6) Output last (right most) 4 K to
 s2[4000]=0;                            //    MessageBox().
 tick=GetTickCount()-tick;
 _tcscpy(szMsg,_T("Here's Your String John In "));   //Let me clue you in on something.
 _stprintf(szTmp,_T("%u"),(unsigned)tick);           //You'll get real tired of this
 _tcscat(szMsg,szTmp);                               //sprintf(), strcpy(), strcat()
 _tcscat(szMsg,_T(" ticks!"));                       //stuff real fast.  It'll wear you
 MessageBox(0,s2,szMsg,MB_OK);                       //right into the ground!
 GlobalFree(s1), GlobalFree(s2);

 return 0;
}

I might add that a 2,000,000 byte string is kind of tight for using low resolution GetTickCount().  For real fast machines you might want to make the string 10 MB or whatever.
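Fred's caveat about GetTickCount()'s resolution is visible in his raw runs, which cluster on values like 15, 16, 31, 46 and 47 because the counter advances in roughly 10-16 ms steps. A hedged, portable alternative (not part of the original test) is std::chrono::steady_clock, which typically resolves well below a millisecond; on Windows the same role is played by QueryPerformanceCounter. The function name timeWorkMicros and the dummy workload below are mine, for illustration only.

```cpp
#include <chrono>

// Times a dummy workload with a steady, high-resolution clock instead of
// the coarse ~15 ms GetTickCount() quantum. Returns elapsed microseconds
// and (optionally) the workload's result so the work cannot be faked.
long long timeWorkMicros(long long* outSum)
{
    using clock = std::chrono::steady_clock;

    const auto t0 = clock::now();

    long long sum = 0;                        // dummy workload to time
    for (long long i = 0; i < 1000000; ++i)
        sum += i;

    const auto t1 = clock::now();

    if (outSum)
        *outSum = sum;                        // 0+1+...+999999 = 499999500000
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}
```

With a clock like this the 2 MB string is no longer "kind of tight"; runs well under a tick of GetTickCount() still produce distinguishable timings.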


« Last Edit: January 22, 2015, 12:25:22 AM by Frederick J. Harris »