Release Notes For Plasma

This is version 0.5.2 "Eigensinn.2", a beta release intended for broader testing.

Changes

Changed in 0.5.2

Fixes:

  • Fix in Pfs_condition.wait_e. There was a race condition that could lead to problems with the named pipe.
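
The race fixed here is the classic lost-wakeup problem for condition variables that signal through a file descriptor. As a hedged sketch (this is an illustration of the general pattern, not the actual `Pfs_condition.wait_e` code): the signaler writes a byte into a pipe, and the waiter blocks in `Unix.select` on the read end, so a wakeup that arrives before the wait begins is not lost, because the byte stays queued in the pipe.

```ocaml
(* Sketch of a pipe-based wakeup that survives a signal arriving before
   the wait. Hypothetical illustration, not the Pfs_condition source. *)

let () =
  let rd, wr = Unix.pipe () in
  (* Signal BEFORE anyone waits: write one byte into the pipe. *)
  let n = Unix.write_substring wr "x" 0 1 in
  assert (n = 1);
  (* Wait: select still sees the pipe readable, so the wakeup is kept. *)
  let readable, _, _ = Unix.select [rd] [] [] 1.0 in
  assert (readable = [rd]);
  (* Consume the wakeup byte so the next wait would block again. *)
  let buf = Bytes.create 1 in
  ignore (Unix.read rd buf 0 1);
  Unix.close rd;
  Unix.close wr;
  print_endline "wakeup delivered"
```

With a raw flag variable instead of a pipe, a signal arriving between the check and the sleep would be lost; keeping the wakeup token in the pipe makes the window harmless.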

Changed in 0.5.1

Fixes:

Changed in 0.5

New features in PlasmaFS:

  • Addition of a key/value file format (Plasma KV)

Implementation improvements in PlasmaFS:

  • The namenode can drive the database better; in particular, parallel commits are now enabled.
  • The "plasma fsstat" command outputs more information
  • The "plasma ls" command can print more metadata fields
  • Datanodes can now be instructed to join multicast groups
  • Fix: anonymous inodes are now deleted only when the last transaction accessing them ends (previously, anonymous inodes were deleted when the transaction removing the last link ended)
  • Various performance improvements

New features in the map/reduce framework:

  • none

Implementation improvements in the map/reduce framework:

  • none

Compatibility:

  • Existing PlasmaFS filesystems are incompatible (db schema changes)
  • There are incompatible protocol changes
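
The actual on-disk layout of Plasma KV is described in the manual. Purely as a hypothetical illustration of what a key/value file format involves (length-prefixed records that can be scanned sequentially), here is a minimal encoder/decoder; it is NOT the real Plasma KV format:

```ocaml
(* Hypothetical length-prefixed key/value record encoding. Illustration
   only; this is not the actual Plasma KV on-disk layout. *)

let encode_record key value =
  let buf = Buffer.create 64 in
  Buffer.add_string buf (Printf.sprintf "%08x" (String.length key));
  Buffer.add_string buf key;
  Buffer.add_string buf (Printf.sprintf "%08x" (String.length value));
  Buffer.add_string buf value;
  Buffer.contents buf

let decode_record s pos =
  let len_at p = int_of_string ("0x" ^ String.sub s p 8) in
  let klen = len_at pos in
  let key = String.sub s (pos + 8) klen in
  let vlen = len_at (pos + 8 + klen) in
  let value = String.sub s (pos + 16 + klen) vlen in
  (key, value, pos + 16 + klen + vlen)   (* returns offset of next record *)

let () =
  let r = encode_record "host" "datanode01" in
  let k, v, next = decode_record r 0 in
  assert (k = "host" && v = "datanode01" && next = String.length r);
  print_endline "roundtrip ok"
```

Because each record carries its own lengths, a reader can skip from record to record without parsing the values, which is the property a blockwise filesystem format needs.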

What is working and not working in PlasmaFS

Generally, PlasmaFS works as described in the documentation. Crashes have not been observed for quite some time now, but occasionally one might see critical exceptions in the log file.

PlasmaFS has so far been tested only on 64-bit machines, and only with Linux as the operating system. There are known issues on 32-bit machines; in particular, the block size must not be larger than 4M.
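A plausible reason for such a limit (this is our assumption, not stated by the release notes) is that 32-bit OCaml caps string buffers at `Sys.max_string_length`, about 16M on 32-bit platforms, so whole blocks held in memory must stay well below that bound. A sketch of checking a proposed block size against the platform limit:

```ocaml
(* Sketch: check a proposed block size against OCaml platform limits.
   The 4M restriction on 32-bit is the documented fact; tying it to
   Sys.max_string_length is an assumption made for illustration. *)

let fits_in_string blocksize = blocksize <= Sys.max_string_length

let () =
  let mb n = n * 1024 * 1024 in
  (* 4M blocks fit in a string buffer on both 32- and 64-bit OCaml. *)
  assert (fits_in_string (mb 4));
  Printf.printf "max_string_length = %d\n" Sys.max_string_length
```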

Data safety: Cannot be guaranteed. It is not recommended to put valuable data into PlasmaFS.

Known problems:

  • It is still unclear whether the timeout settings are acceptable.
  • There might be name clashes for generated file names. Right now it is assumed that the random number generator returns unique names, but this is certainly not guaranteed.
  • The generated inode numbers are not necessarily unique after namenode restarts.
  • Some namenode operations do not reduce the blocklimit metadata field when it would be possible.
  • It is not yet possible to limit the number of connections the namenode accepts. When it hits the OS limit, an exception occurs, and the namenode is left in an inconsistent state. This is of course not acceptable.
  • Writing large files via the NFS bridge may result in performance problems when the NFS client does not respect the block boundaries.
  • There is still the security problem that allocated but unwritten blocks remain in an undefined state (an information leak is possible)
  • The recursive removal of large directory trees in a single transaction runs into a performance problem.
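
The name-clash risk noted above is an instance of the birthday problem: drawing k names uniformly from a space of size N, a collision becomes likely once k approaches the square root of N. A small sketch of the standard approximation p ≈ 1 − exp(−k²/2N), with a hypothetical 30-bit name space chosen only for illustration:

```ocaml
(* Birthday-bound estimate for random file-name collisions:
   p ~ 1 - exp (-k^2 / 2N) for k names from a space of size N. *)

let collision_prob ~names:k ~space:n =
  1.0 -. exp (-. (k *. k) /. (2.0 *. n))

let () =
  (* Hypothetical 30-bit random name space, for illustration only. *)
  let n = 2.0 ** 30.0 in
  Printf.printf "p(10^3 names) = %.6f\n" (collision_prob ~names:1e3 ~space:n);
  Printf.printf "p(10^5 names) = %.6f\n" (collision_prob ~names:1e5 ~space:n)
```

Even with only a hundred thousand names drawn from a 30-bit space, a clash is nearly certain, which is why relying on the random number generator alone is unsafe.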

Not implemented features:
  • There are too many hard-coded constants.
  • The file name read/lookup functions should never return ECONFLICT errors. (This has been improved in 0.2, though.)
  • Support for checksums
  • Support for "host groups", so that it is easier to control which machines may store which blocks. The semantics have yet to be specified.
  • Define how blocks are handled that are allocated but never written.
  • Recognition of the death of the coordinator, and restart of the election algorithm.
  • Lock manager (avoid that clients have to busy wait on locks)
  • Restoration of missing replicas
  • Rebalancing of the cluster
  • Automated copying of the namenode database to freshly added namenode slaves
  • IPv6 support

What is working and not working in Plasma MapReduce

Not implemented features:

  • Task servers should be able to provide several kinds of jobs
  • Think about dynamically extensible task servers
  • Run jobs that define only map but no reduce
  • Support for combining (an additional fold function run after each shuffle task to reduce the amount of data)
  • Nice web interface
  • Support for user counters as in Hadoop
  • Restart/relocation of failed tasks
  • Recompute intermediate files that are no longer accessible due to node failures
  • Speculative execution of tasks
  • Support for job management (remember which jobs have been run, etc.)
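
The "combining" feature listed above, a fold run after each shuffle task, can be sketched generically: values are pre-aggregated per key on the local node before any data moves, shrinking the shuffle volume. A hypothetical sketch (not the Plasma map/reduce API) with integer counts:

```ocaml
(* Generic combiner sketch: fold values per key locally before the
   shuffle so that less data crosses the network. Hypothetical code,
   not the actual Plasma map/reduce framework API. *)

module SMap = Map.Make (String)

(* Collapse a list of (key, count) pairs into one count per key. *)
let combine pairs =
  List.fold_left
    (fun acc (k, v) ->
       SMap.update k
         (function None -> Some v | Some v' -> Some (v + v'))
         acc)
    SMap.empty pairs

let () =
  let combined = combine [ ("a", 1); ("b", 2); ("a", 3) ] in
  assert (SMap.find "a" combined = 4);
  assert (SMap.find "b" combined = 2);
  print_endline "combine ok"
```

The fold must be associative and commutative for this to be safe, since the reducer will apply it again to the partially combined streams.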

What we will never implement:

  • Jobs consisting only of reduce but no map cannot be supported due to the task scheme (reason: input files for sort tasks must not exceed sort_limit)

This web site is published by Informatikbüro Gerd Stolpmann
Powered by Caml