Mode Control

Opening and Closing Visuals

LibGGI is capable of handling multiple displays and/or viewports, which we call "visuals". Each visual is completely independent of the others.

You can use these visuals to display on multiple monitors and/or in multiple windows, or to work on "virtual" graphics devices like in-memory pixmaps or even PPM files on disk.

ggi_visual_t ggiOpen(const char *display,...);
Opens a visual. This may represent a VT in fullscreen mode, an X window, an invisible memory area, a printer, ...
A visual is simply something you can draw on. It is identified by a handle of type ggi_visual_t, which is passed to all drawing functions that should operate on that visual.
Please note that the type ggi_visual_t is opaque to the user. Do not try to access any part of the structure directly; it may change without notice.

[ Ann.: ggiOpen does not set the focus to the new visual - use ggiSetFocus to set the focus. See the section below for the meaning of 'focus'.]
Return Code: A visual or NULL for error.
Example: see FIXME.
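
As an illustrative sketch, here is the usual way to open the default visual and to release it again with ggiClose, which is described next. The ggi/ggi.h header path and the meaning of a NULL display argument ("use the default display") are assumptions that may differ between LibGGI versions, as may the need for a global initialization call before ggiOpen.

#include <stdio.h>
#include <ggi/ggi.h>            /* assumed header location */

int main(void)
{
        ggi_visual_t vis;

        /* NULL is assumed to select the default display target,
           e.g. the current console or an X window. */
        vis = ggiOpen(NULL);
        if (vis == NULL) {
                fprintf(stderr, "could not open a visual\n");
                return 1;
        }

        /* ... set a mode and draw here ... */

        ggiClose(vis);          /* see below */
        return 0;
}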

int ggiClose(ggi_visual_t vis);
Release the control structures associated with a visual and destroy it. This will close X windows, return KGI consoles to text mode, ...

If focus is on the closed visual, focus is set to NULL.
Return Code: 0 for o.k. or error code
Example: see FIXME.


Setting/Getting Focus - obsolete

int ggiSetFocus(ggi_visual_t vis);
Basically used for compatibility with the old LibGGI. Effectively obsolete.
The old libggi didn't have a notion of a visual (i.e. it was quite single-headed), so this function could be used to select a visual to work on with the old functions.
Return Code: 0 for o.k. or error code
Example: none - obsolete.

ggi_visual_t ggiGetFocus(void);
Basically used for compatibility with the old LibGGI.
Return Code: the currently focused visual.
Example: none - obsolete.


Setting/Getting Mode

int ggiSetMode(ggi_visual_t visual,ggi_mode *tm);
Set any mode (text or graphics).
Use this if you want something really strange that the specialized SetMode functions cannot give you. You do not want to use this unless you really know what you are doing and understand the values in ggi_mode.

int ggiGetMode(ggi_visual_t visual,ggi_mode *tm);
Get the current mode.

int ggiCheckMode(ggi_visual_t visual,ggi_mode *tm);
Check any mode (text or graphics). A return code of 0 means that a ggiSetMode call with this mode would succeed.
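
As a sketch of how these three calls fit together (assuming they all follow the "0 for o.k." convention used elsewhere in this section), the current mode can be fetched, verified and re-applied without touching the ggi_mode members directly. vis is a visual opened as in the earlier sketch.

ggi_mode mode;

if (ggiGetMode(vis, &mode) == 0 &&      /* fetch the current mode      */
    ggiCheckMode(vis, &mode) == 0 &&    /* would ggiSetMode accept it? */
    ggiSetMode(vis, &mode) == 0) {
        /* the mode was (re)set successfully */
}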

int ggiSetTextMode(ggi_visual_t visual,int cols,int rows,
                   int fontx,int fonty);
Set a text mode with the given number of columns and rows and a font of the given size.

int ggiCheckTextMode(ggi_visual_t visual,int cols,
                     int rows,int fontx,int fonty);
Check a text mode.
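
For example, a classic 80x25 text mode with an 8x16 font might be requested like this (a sketch, again assuming the 0-for-success convention):

/* only set the text mode if the driver reports it as available */
if (ggiCheckTextMode(vis, 80, 25, 8, 16) == 0) {
        ggiSetTextMode(vis, 80, 25, 8, 16);
}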

int ggiSetGraphMode(ggi_visual_t visual,int x,int y,
                    int xv,int yv,ggi_graphtype type);
Set a graphics mode with a visible area of size x/y, a virtual area of size xv/yv (you can pan around the virtual area using the SetOrigin command), and the specified graphics type.

int ggiCheckGraphMode(ggi_visual_t visual,int x,int y,
                      int xv,int yv,ggi_graphtype type);
Check a graphics mode.
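
A sketch of requesting a 640x480 visible area on a 640x960 virtual area (twice the height, for panning via SetOrigin). The graphtype constant GT_8BIT is an assumption - check your headers for the ggi_graphtype values your version actually defines.

ggi_graphtype type = GT_8BIT;   /* assumed constant for 8-bit pixels */

if (ggiCheckGraphMode(vis, 640, 480, 640, 960, type) == 0) {
        ggiSetGraphMode(vis, 640, 480, 640, 960, type);
}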


Miscellaneous

void *ggiGetFB(ggi_visual_t vis);
Get framebuffer info. This is deprecated. Use ggiGetInfo or, even better, the DirectBuffer interface instead.

const ggi_info *ggiGetInfo(ggi_visual_t vis);
Get framebuffer info. The struct ggi_info is defined as follows:
typedef struct ggi_info {
        ggi_uint        flags;
        ggi_mode       *mode;
        ggi_info_fb     fb;
} ggi_info;
The flags are for setting the library into special modes that allow certain optimizations. Please read the section below for further information.

The mode is the current mode of the queried visual, as ggiGetMode would return it.

The fb entry describes the primary framebuffer exported by the device. It should be used only if you know the exact representation of the data inside the buffer.
Programmers are encouraged to use the DirectBuffer API instead.

typedef struct ggi_info_fb {
        int       bpp;            /* Bits per pixel */
        int       bpc;            /* Bits for color per pixel */
        int       bpa;            /* Bits for alpha per pixel */
        ggi_uint  width;          /* Width in pixels */
        ggi_uint  height;         /* Height in pixels */
        void     *linear;         /* Linear framebuffer */
        ggi_uint  linear_size;    /* Size of framebuffer in bytes */
} ggi_info_fb;
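
A short sketch of inspecting the exported framebuffer description for a visual vis opened as before (the NULL check on the return value and the casts for printing ggi_uint are assumptions):

const ggi_info *info;

info = ggiGetInfo(vis);
if (info != NULL) {
        printf("primary framebuffer: %lux%lu pixels, %d bpp\n",
               (unsigned long) info->fb.width,
               (unsigned long) info->fb.height,
               info->fb.bpp);

        if (info->fb.linear != NULL) {
                /* info->fb.linear points at info->fb.linear_size bytes
                   of linear framebuffer - prefer the DirectBuffer API
                   over poking it directly */
        }
}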

int ggiSetInfoFlags(ggi_visual_t vis,ggi_uint flags);
Set the info flags of the given visual. To set/unset single flags, do a read-modify-write using ggiGetInfo and ggiSetInfoFlags.

Do not modify the ggi_info struct returned by ggiGetInfo directly. The library will probably ignore such changes, and they might cause very strange effects.

The following flags are defined up to now:

#define GGIFLAG_ASYNC   0x0001
#define GGIFLAG_SYNC    0x0000
If you set ASYNC mode, LibGGI will cease to automatically synchronize the physical display with the logical visual you are drawing to. You have to do so explicitly by using ggiFlush().

This does not mean that no visible drawing will happen, but that it is not guaranteed. More complex GGI-native apps are expected to use asynchronous mode, as it might allow for faster execution and some optimizations.

[ Ann.: The average driver for a graphics card will probably always be in SYNC mode, except for maybe queueing accel-commands and executing them in a single gulp. However, the X driver, for example, which draws to a hidden pixmap, will stop flushing it to the window periodically. ]
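
The read-modify-write idiom mentioned above might look like this for switching a visual into asynchronous mode (a sketch; vis is a visual opened as before):

const ggi_info *info;
ggi_uint flags;

info = ggiGetInfo(vis);         /* read the current flags ...  */
flags = info->flags;
flags |= GGIFLAG_ASYNC;         /* ... modify them ...         */
ggiSetInfoFlags(vis, flags);    /* ... and write them back     */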

int ggiFlush(ggi_visual_t vis);
Sync the display with the drawing command queue. The call will block until the display is in sync with what it should look like after the given commands. You should do this whenever you think it is important that the user sees the current picture.
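
In asynchronous mode a typical drawing loop would therefore end each frame with a flush. A sketch (the drawing calls themselves are outside the scope of this section):

int frame;

for (frame = 0; frame < 100; frame++) {
        /* ... draw frame number 'frame' on vis ... */

        /* make sure the user actually sees the result before the
           next frame is started */
        ggiFlush(vis);
}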