
Messages - Astrosynthesist

#1
I sent a PM with my email address. :)
#2
Hello everyone!

I am a huge fan of the Vectrex32, and I hate that I have been sitting on this code for a year and a half without ever completing the game. I want this platform to succeed in the Vectrex community, so I would rather give away my starter code than see it go to waste and never get completed. Please note that I stalled development around the 1.15 release, as there were some bugs I was helping Bob uncover at the time that limited my ability to complete the game... I don't think that is much of a problem now, however. Unfortunately, my own life has been tumultuous and I have not had a chance to pick the project back up - my Vectrex has been sitting sad and unplugged for a few months. :(

So! I have 360 degrees of Star Trek Enterprise, Klingon Bird of Prey, and starbase sprites, as well as a pretty reliable game engine. All that's left is to make the mines and mine layers, plus most of the game logic and sound. If you are interested in picking up this project from where I am leaving off, please leave a reply and I can send you a nice big fat zip file with everything I have done so far. I am also happy to offer support on what I have written, as my commenting was, well, poor.

I hope that someone is interested in picking this up, and all I ask is that I get a credit in your final game, either in the BASIC code or, preferably, in the on-screen credits.
#3
Ah! Good point. Yes, I did mean the 12-bit notes. That's actually good because the top 4 bits can be interpreted as control digits like the aforementioned drum sounds or something.

In fact, I freely admit that this whole post came about because I wanted to be lazy and use the Play() function; however, I didn't know that the Vectrex had a subroutine you were feeding directly into, so that's interesting information. An advanced BASIC function is what I thought the concept already was. Now I get it, and yes, it would be a very nice feature. :)

For clarity's sake, now that I have a more complete understanding of what's going on, I am thinking about adding a sound update command to the main loop of the game I'm developing. If this idea takes off and we get a feature-rich play() function, that's great, but at least my program can provide an example of how to do it in a more traditional way.

In fact, I have never coded game music before, so MML was not on my radar! It looks very interesting. From the brief look I took at it, I don't think it traditionally contains control characters. Whether you decide to implement it using a version of MML or the style I suggested above, I still think it's important to have control messages sent inline with the music. I think of it like a serial MIDI interface between a keyboard with buttons and sliders (the program) and a sound module. When you play on the keyboard, MIDI sends Note On and corresponding Note Off messages. If you move a slider or press a button, MIDI sends controller messages over the same line. Since we are sequencing all messages, the only thing that's not possible with the idea I proposed is sending controller messages while notes are playing, although in practice, for a sound chip like this, that isn't so important.

Aaaaaaaand now I'm starting to get a really devious idea about hooking up a midi port to controller port 2 and letting people play the Vectrex........... Anyways that's a story for another day :)

Think of it like this:
An envelope should shape the length of one note: from note On, the attack stage commences, then the decay stage executes, then the sustain stage holds until note Off is received (or the note times out, in our case), and finally the release stage executes. If a note On is received before a note Off, the envelope retriggers the attack stage from whatever the previous value was (some people would say it should start from 0 again; it's a design choice, but starting from where it left off is more like a classic synthesizer).
That's the best I can say; I don't quite understand the quandary between "one whole note" and "12 frames" if one whole note lasts for 12 frames.
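To make the stages concrete, here is a rough per-frame sketch in Python (not Vectrex32 BASIC); the 30 Hz frame tick, the linear stage shapes, and retriggering from the current level are just the assumptions described above.

class Envelope:
    def __init__(self, attack_frames, decay_frames, sustain_level, release_frames):
        self.a = attack_frames
        self.d = decay_frames
        self.s = sustain_level      # 0.0 - 1.0
        self.r = release_frames
        self.stage = "idle"
        self.level = 0.0            # current amplitude, 0.0 - 1.0

    def note_on(self):
        # Retrigger the attack from wherever the level currently is
        # (classic-synth style) rather than resetting it to 0.
        self.stage = "attack"

    def note_off(self):
        self.stage = "release"

    def tick(self):
        # Called once per frame; returns the level to write to the amp register.
        if self.stage == "attack":
            self.level += 1.0 / max(self.a, 1)
            if self.level >= 1.0:
                self.level, self.stage = 1.0, "decay"
        elif self.stage == "decay":
            self.level -= (1.0 - self.s) / max(self.d, 1)
            if self.level <= self.s:
                self.level, self.stage = self.s, "sustain"
        elif self.stage == "release":
            self.level -= self.s / max(self.r, 1)
            if self.level <= 0.0:
                self.level, self.stage = 0.0, "idle"
        # "sustain" and "idle" simply hold their current level.
        return self.level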

Sorry for the slow reply, this and next week are a bit busy for me!

#4
Yes, although with the advent of code sprites it is theoretically possible to make the 6809 update the envelope faster than 30 times per second in BASIC. Definitely something that can be experimented with. I don't know how much work the 6809 is actually doing; if there are a lot of wasted cycles, it might be possible to have the 6809 do envelope control exclusively, using internal memory.

Yes, I could use a 16-bit integer and break it in two, but let me show you a few reasons why I think it would be better to keep it as a single 16-bit integer for the sake of the BASIC code.

If I were to design this from scratch, I would draw on your original implementation for inspiration: I like the idea of having both a play() and a sound() function. I want to refine play() so that it becomes more customizable.
Firstly: the input to play() should be similar to the current implementation: play({{NA3, 10},{NB3, 2}}). The difference is that play() would now expect the note constants to be numbers from 0x0000 to 0xffff. I still haven't actually determined what is what, but let's say for argument's sake that NA3 = 0xa3b2. If, for example, I want to slide into that pitch for an effect, I can write code that successively plays NA3-5, NA3-4, NA3-2, NA3-1, NA3, and thus play notes that are not explicitly in tune but glide into tune. This effect is commonly heard in NES tracks. In the case of sound(), if I can program R0 and R1 at the same time, then again simple math allows me to easily and intuitively modulate between pitches. To be honest, without a bitwise AND function I'm not even sure how to isolate the MSB from the LSB for each register in BASIC; it might be a silly oversight on my part, but it's not immediately coming to mind.
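For what it's worth, the MSB/LSB split can be done with plain integer arithmetic even without a bitwise AND; a quick Python illustration, using the placeholder 0xa3b2 value from above:

note = 0xa3b2                  # hypothetical 16-bit note constant (the NA3 example above)
msb, lsb = divmod(note, 256)   # equivalent to (note >> 8) and (note & 0xff)
print(hex(msb), hex(lsb))      # prints 0xa3 0xb2
# In a BASIC dialect without bitwise operators, integer division and subtraction
# do the same job: msb = int(note / 256) and lsb = note - msb * 256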
Similarly, the play() function can accept the ABC function as an argument. The ABC function will work in a similar way as it does now, with one difference:
The programmer can code which voice is assigned to which note by making the ABC function accept nil as an empty note. So, positionally, you can say ABC(NA4, NC4, NF5) and voice A will play A4, B will play C4, and C will play F5. Alternatively, you can say ABC(NA4, nil, NF3) and voice A will play A4, voice B will not play, and voice C will play F3. Thus the B channel is left reserved during that time period for the sound() command to use for an effect.
Finally, the play() function will accept control commands. These can be constants defined above the note range, and they will affect sound chip parameters or special features such as software-controlled envelope functions; the second position of the array entry holds the value associated with the setting. This is where the biggest improvement can be made, and it also leaves room to implement features as they become available. Control commands can be placed inside a song array to change voice chip and music settings on the fly. Some example syntax:

play({{CTLASSN, CYC or SING}}) - sets the assign mode for monophonic note playing to either cyclical (A then B then C) or single (A then A then A). In monophonic mode with no ABC function used, A, B, and C all get their settings from the voice A settings.

play({{CTLAMPA, 0x0a}}) - sets the amplitude of amp A to 10 (R8 -> 0x0a)
play({{CTLAMPA, 0x10}}) - sets the amplitude mode of amp A to internal envelope (R8 -> 0x10)
play({{CTLAMPA, 0x11}}) - sets the amplitude mode of amp A to software-defined envelope 1 (R8 is varied from 0x00 to 0x0f as defined by the setting of the software defined envelope over time)
play({{CTLENV1ATT, #frames to peak level envelope 1}}) - assuming that 30 frames per second allows for adequate envelope control and that we don't have to write special assembly code to update envelopes between frames, we can start experimenting with this.
play({{CTLENV1PK, 0x0 - 0xf}}) - peak level of the software defined envelope, the maximum value achieved by the attack phase.
play({{CTLENV1DEC, #frames to decay from full amplitude to the sustain level of the envelope on envelope 1}})
play({{CTLENV1SUS, 0x0 - 0xf}}) - sustain level of the software defined envelope, the resting level of the amplifier while the note is being played
play({{CTLENV1REL, #frames to decay from current (usually sustain) level to 0 after note stops being played envelope 1}}) - can be cut off when another note is played if the other note is started using the same envelope such that the new note gets a full envelope starting from the attack stage
play({{CTLENV1TRG, RETRIG or SUS}}) - If the envelope is in the sustain stage and a new note is played before release mode is activated (two notes are chained one after the other), RETRIGger the envelope (to distinguish each new note) or maintain the SUStain level (to slur notes together)

There should be three software-defined envelopes available, and potentially also software-defined LFOs (variable shape, or simply triangular) that sum with the mixer value when the mixer is set to either a constant value or an envelope-controlled value (not too important; this can be added later for fun):
play({{CTLLFO1AMP, 0x0 - 0xf}}) - sets peak value to add/subtract from current audio level
play({{CTLLFO1FRQ, #frames for full cycle}}) - again assuming this is good enough resolution for now
play({{CTLLFO1TRG, RETRIG or RUN}}) - sets whether the LFO RETRIGgers at 0 for every new note or continues RUNning from its last value
Potentially: play({{CTLLFO1SHP, SQR or TRI or SIN}}) - change shape of LFO wave (0 centred)

There should be controls for the built-in chip envelope as well:
play({{CTLENVIPRD, 0x00 - 0xff}}) - Internal envelope period control
play({{CTLENVISHP, 0x0 - 0xf}}) - Internal envelope shape control

Finally, when the end of the notes array is reached the option to repeat the array should be given:
play({{CTLREPEAT, ON or OFF}}) - Repeats the play() array to allow for background music that does not end.

These control parameters inside the play() function allow for on-the-fly voice or playback modification, and they should all be parsed sequentially until the next note or ABC entry is found, at which point the voice chip gets programmed to play those notes for the given number of frames. Using a similar schema, it would be fantastic to define control parameters for applying software envelopes, LFOs, and glide to the pitch values.
I am running out of time right now, but if you want me to draft up some syntax for that, I can in a little while. Envelopes and LFOs should be applied in a similar way, and glide just needs a time control to slew from one note to the next. Glide will almost certainly need intervention from the 6809, as it will need as high a resolution as possible for fast glide times. This is another feature that can wait for now; it is not super important, but it would be very cool.
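As a rough illustration of the "parse controls until the next note" idea, here is a Python sketch of how a play()-style interpreter might walk such an array. The entry names and the chip-programming call are hypothetical placeholders, not the real Vectrex32 API:

# Hypothetical song array: control entries and note entries interleaved,
# mirroring the proposed play() syntax above.
song = [
    ("CTLAMPA", 0x0a),      # control: set amp A level
    ("CTLENV1ATT", 3),      # control: envelope 1 attack time, in frames
    ("NA3", 10),            # note: play NA3 for 10 frames
    ("CTLAMPA", 0x11),      # control: switch amp A to software envelope 1
    ("NB3", 2),             # note: play NB3 for 2 frames
]

settings = {}

def program_chip(note, frames, settings):
    # Placeholder for actually writing the PSG registers and counting off `frames` frames.
    print("play", note, "for", frames, "frames with", settings)

for name, value in song:
    if name.startswith("CTL"):
        settings[name] = value                      # apply control entries sequentially...
    else:
        program_chip(name, value, dict(settings))   # ...until the next note is reached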

I currently don't see a need to implement noise using the play() command unless you wish to program in preset drum sounds to make it even more convenient... but I think that is another long-term goal. In the meantime, we have to devise a way to allow drum sounds to be created outside the play() function and called inside it; the goal is for an entire song to be sequenced inside one play() array. I currently don't have an idea for how to integrate sound() into the ABC command, but maybe you can help me think of something - for example, a subroutine called snare_drum that contains only a sound() command with the necessary snare drum parameters, so it can be called repetitively. That doesn't currently seem feasible, but hopefully we can come up with something.
There will also need to be interrupt handling for incidental sound effects, such that if game music is using all 3 voices, one voice can be superseded by a sound effect, and when that sound effect has completed the voice can then resume being used by the music.

This is definitely a LOT of work. Let me know how else I can help!
#5
I'm happy to!
I don't want to be condescending here, but I don't know which terms you are and aren't familiar with, so I'll write this in a way that can be used as a tutorial for anyone getting into music programming in general.
When I am talking about monophonic vs. polyphonic, I mean playing one note at a time vs. playing more than one note at a time. So a monophonic melody would be like the typical music that comes out of a PC speaker, and a polyphonic melody would be something like the Xevious game start music.
Voice allocation is when you have multiple voices (A, B, and C on the Vectrex voice chip) and you are playing multiple notes and need to determine how to... well... allocate the voices. So, for example, I want to play the notes C, E, and G one after the other (a monophonic melody). I could allocate the voices as follows (allocation mode 1): Voice A plays note C, then voice B plays note E, then voice C plays note G. Or, I could allocate them this way (allocation mode 2): Voice A plays note C, then voice A plays note E, then voice A plays note G. This might seem like it doesn't matter in practice, but it will become important soon. Also note that there are many different ways to allocate voices but I am starting with these two to illustrate my point.
The envelope is the shape of the amplitude, or loudness, of the sound (in a simple case such as this; in synthesizers, envelopes can be applied to all kinds of different things!). Because this is discussed in the manual, I will not go into detail, but there is one thing to keep in mind. After a note is played, depending on the setting of the envelope, it might have to decay. Say the decay lasts for 1 second. With allocation mode 1, you hear an overlap as the C decays while the E plays, and then as the E decays while the G plays. This has the effect of slurring the notes together. Meanwhile, with allocation mode 2, the notes don't blend together at all, since a single voice is handling all of the notes: the decay portion never occurs for the first two notes; the voice just switches right into the next note, C->E->G, with the decay only occurring at the end of the G. Both modes are valid, but it would be nice to be able to choose between them.
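A tiny Python sketch of the two allocation modes described above, just to make the difference explicit (the melody and voice names are from the C/E/G example):

melody = ["C", "E", "G"]
voices = ["A", "B", "C"]

# Allocation mode 1: cycle through the voices, so each note's decay can
# still be ringing on its own voice while the next note starts.
for i, note in enumerate(melody):
    print("voice", voices[i % len(voices)], "plays", note)

# Allocation mode 2: every note goes to voice A, so starting a new note
# cuts off the previous one and nothing overlaps.
for note in melody:
    print("voice A plays", note)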

On this chip you can only choose one envelope and apply it to all 3 voices under one setting. Alternatively, you can have some voices use the envelope and others be mixed at a fixed level set manually through the amplifier. By modulating the amplifier levels over time, you can create the effect of an envelope in software instead of using the hardware on the chip. This is a little more resource intensive, but it shouldn't be a problem for the PIC32. It lets you create a different envelope for each voice, instead of having to use the internal envelope generator, which would give every voice the same envelope. Say you want two voices (A and B) playing a melody with a slow decay, and a single voice (C) playing a plucky bass line with a fast decay: this is possible with the use of software envelopes.
It gets even cooler than that, because you can then use the now-free internal envelope to amplitude modulate the output at audio frequencies. This creates a tonal change, so the output of the voice chip isn't just a square wave. Say you amplitude modulate a square wave at twice the frequency of the square wave, and let your envelope be a repeating sawtooth wave. Every time the square goes high, the envelope shapes that portion of the square wave into the sawtooth (think of it as a logical AND: if the square wave is high, the output voltage is whatever the envelope voltage is; if it is low, the output is low. Thus, for every high portion of the square wave, the amplitude envelope appears at the output). When the envelope repeats itself, forming a sawtooth or triangle wave, it is what is known as a low frequency oscillator, or LFO. There are many synthesizer tutorials that show what an LFO can do when you run it at audio frequency. In this context the LFO is modulating amplitude, and on this chip that is the ONLY thing the built-in envelope can affect.
Complex explanation link:
https://www.soundonsound.com/techniques/amplitude-modulation
Simple explanation link:
https://www.keithmcmillen.com/blog/simple-synthesis-part-9-amplitude-modulation/
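For anyone who wants to see the "logical AND" picture above in code, here is a small Python sketch of a square wave gated by a repeating sawtooth envelope running at twice its frequency; the sample rate and frequencies are arbitrary values chosen purely for illustration:

import math

sample_rate = 8000            # arbitrary, for illustration only
square_freq = 200.0           # tone frequency in Hz
env_freq = 2 * square_freq    # envelope (an LFO pushed to audio rate)

def sample(n):
    t = n / sample_rate
    square_high = math.sin(2 * math.pi * square_freq * t) >= 0  # square wave as a gate
    saw = (t * env_freq) % 1.0                                  # repeating sawtooth, 0..1
    # "Logical AND": the output follows the envelope only while the square is high.
    return saw if square_high else 0.0

wave = [sample(n) for n in range(sample_rate // 50)]  # 20 ms of samples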

In terms of note values, in the sound() command you need to set the high and low registers of each voice. I would love the ability to use one 16-bit integer to set both at once. That way the note constants can have +/-1 added to them, which allows for fine pitch control instead of only chromatic pitch control. In other words, the ability to finely adjust pitch allows for effects such as gliding between notes (portamento), varying the pitch with another software-defined LFO (vibrato), or bending notes (pitch bend). At the very least, it would be nice to have the note constants defined as such 16-bit integer constants instead of NA2=1, NAS2=2, etc. I would recommend developing a new standard for these constants so that the old programming method can be maintained for backwards compatibility.
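As a sanity check on what such a 16-bit constant would encode: on the AY-3-8910 family, the tone frequency is f_clock / (16 * period), where the period is a 12-bit value split across a coarse and a fine register (R1 and R0 for voice A). A short Python sketch, assuming the Vectrex's nominal 1.5 MHz PSG clock:

PSG_CLOCK = 1_500_000    # assumed Vectrex PSG clock, in Hz

def note_period(freq_hz):
    # 12-bit tone period for the AY-3-8910 family: f = clock / (16 * period)
    return round(PSG_CLOCK / (16 * freq_hz))

period = note_period(440.0)           # A4
coarse, fine = divmod(period, 256)    # coarse -> R1 (4 bits), fine -> R0 (8 bits)
print(period, hex(coarse), hex(fine)) # 213 -> 0x0 0xd5

Adding or subtracting 1 from a period value like that is exactly the kind of sub-semitone pitch adjustment described above.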

In terms of the PCM stuff, I'll leave that one for now as I don't even fully understand it yet. I will investigate more of the nitty gritties in the future.

Please let me know if I can clarify myself further!
#6
Hello! I am back into developing again!

I am going to write a few observations on the current feature set of the sound engine in Vectrex32 v1.17, based on the development I've been doing so far.

The Play() function:
I am really happy that the Play() function has been fixed so that it no longer plays that interesting random set of tones after completing! I haven't tested Play() polyphonically, but I have tried to make a monophonic melody with it. It was simple to do, and I very much appreciated that simplicity; however, I personally feel it was almost too simple. A monophonic melody produced with the Play() function has what I perceive to be a bit of a flaw, though others might see it as a creative limitation: it appears to use dynamic voice allocation, letting a previously played tone's envelope complete while the next tone starts on a new voice, which creates an overlap of tones while the previous tone fades out. This is a perfectly valid voicing mode, but control over this behaviour would be excellent. In my case, I want a clear melody without voice allocation (using only voice A, for example), and the only way I can achieve this is with low-level control using the Sound() function. To summarize, I would love for there to be a new playMode() function which takes parameters for the following:
- Voice allocation mode when using less than 3 voices
- General envelope shape and length control (One setting for all voices to abide by using the chip's built in envelope)
- General amp control (To allow the creation of software defined envelopes for all voices)
Granted, I don't know how this is implemented in the backend, but I am working on the assumption that you have written voice allocation code to create this effect, along with preset envelope and amp settings for this mode.
Otherwise, I will basically never use the Play() function except for the wonderfully useful Play(nil) option, opting instead for the low-level control offered by Sound(), as I am a musician, and a synthesizer player at that, and thus a picky bastard.

On that note (hah, nice pun), I would really like the built-in note constants to be mapped to hex numbers so that they can be used interchangeably between Sound() and Play() (that is, for Play() to accept hex arguments instead of integer arguments), or for another set of constants to be created with this mapping. I am currently hunting down the right tunings by ear, slightly adjusting and fine-tuning the hex values I'm using with Sound(), and there has to be a better way to achieve good temperament. At the very least, please include the pertinent hex values for each standard tuning note in the manual!

Finally, a built-in function to use the chip as a D/A converter/PCM player, as described in the Advanced Techniques section here: https://www.revolvy.com/page/General-Instrument-AY%252D3%252D8910 , would be super cool; however, I realize that is likely to be a lot of work. Something to think about if you have some spare time to play with it in the future. If I have time, I might try something simple as a proof of concept now that we have codeSprites to play with, but I have to get my current project out the door first!
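For reference, the usual version of that PCM trick silences the tone and noise in the mixer and then writes successive sample values into a channel's 4-bit amplitude register at the sample rate. Here is a small Python sketch of just the sample-conversion step; write_amp_register() is a hypothetical placeholder, not a Vectrex32 call:

def write_amp_register(value):
    # Placeholder for whatever mechanism (a code sprite, presumably) would
    # actually write voice A's amplitude register fast enough to act as a
    # crude 4-bit DAC.
    pass

def play_pcm(samples_8bit):
    for s in samples_8bit:
        write_amp_register(s >> 4)  # keep only the top 4 bits (0-15)
        # ...then wait one sample period before the next write

play_pcm([0, 64, 128, 192, 255, 192, 128, 64])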
#7
What about drawing the undesired dots with an intensity of zero?
#8
Also, multiple notes overlap each other when called with the Music function. I would personally prefer the ability to use ABC to specify which voice gets which note. In other words, it should work such that "ABC(NG2,nil,nil)" always uses voice number 1, so that I can reserve one or both of the other voices for sound effects and low-level control.
This will give away what I'm up to, but anyway: you can hear some of these notes overlap when they are played, even though they should all be one after the other. It does this whether I use the ABC function or not:
tempo = 2
strekMusic = {{ABC(NA3),9*tempo},{ABC(ND4),3*tempo},{ABC(NG4),18*tempo},{ABC(NFS4),6*tempo},{ABC(ND4),4*tempo},{ABC(NB3),4*tempo},_
    {ABC(NE4),4*tempo},{ABC(NA4),21*tempo},{ABC(NA4),3*tempo},{ABC(NCS5),24*tempo}}
strekMusic = Music(strekMusic)
call play(strekMusic)
#9
This was not the case in 1.14, if I recall correctly, but now whenever I call music using the "play" command (including when running yankee.bas), random noise and notes are played at the end of the music. This does not seem to affect the built-in music, only music created using the "Music" command:
call play(music({{NA4, 12}}))
#10
Okay then, I strongly recommend a note in the manual! :)
#11
What about using appendArrays to append the old array to {{moveTo,x,y}}, or having BASIC recognize that there is no moveTo,0,0 at the beginning of the array when the sprite is initialized and add one before creating the sprite in the first place?
#12
I'm not certain.
What about inserting a moveSprite ahead of it in the drawing list if there is no initial moveTo?
#13
Two things:
In the Vectrex32 manual on page 63, the code example for "spriteTranslate" uses the call "spriteTranslation"
a = linesSprite({{DrawTo,50,0}})
call spriteTranslate(a,{10,10})

Since there is no initial moveTo, spriteTranslate only moves the end point of the linesSprite and not its beginning point, distorting the vector rather than translating it.
#14
call scaleSprite(32)
call dotsSprite({{0,128}})

If a coordinate is greater than 127, two dots are drawn on screen, similar to how drawing a line longer than 127 creates multiple line segments to complete the draw. Similarly, if a coordinate is greater than 255, then three dots are drawn on screen.
#15
Ah, that makes sense. Thankfully your description of how to fix the problem is still true. :)